I started going through the docs. This language starts off with some hefty selling points - prevent errors, increase test coverage, etc. Then the docs mostly cover things like "you can call an array an array or a list" and stuff that isn't very interesting to me.
It doesn't even show methods or things like that. I think 90% of the docs I've read so far should have just been a single page with one liners one after the other, and less interesting stuff that's got more depth should go further back. I don't think showing nested arrays and dictionaries is interesting to me after you've just told me about some super interesting testing / correctness stuff - lead with that!
I skipped ahead to Decision Making. I think the example is so strange and contrived - three functions, each returns some number above 100, some logic after that that feels arbitrary? I would do something at least a little more universal like fib.
The end of this section states "100% testable" - I'm confused. Why is this testable?
I could not write one unit test to have 100% code coverage, not even line coverage.
If I, for example, wrote a test that used the code provided, with an assertion
assert add(3, 5) == 101
I would not hit f2 or f3, so I'm confused by:
"A single unit test is enough to have 100% coverage on functions, always."
That unit test was definitely not enough for 100% coverage.
I skimmed the rest.
For a language that states it's great for testing and preventing errors, I am at a loss as to how one would write a test for it, or handle an error.
This looks like a pretty neat language, but you really gave me such a good hook and then no follow through.
It would work if you somehow could get the value of `z` (the "return" value of `add` in the example). Then you have a unit test for `add` on its own, and if you also unit test `f1`, `f2` and `f3` (which it calls depending on the value of `z`), you'd have almost full unit test coverage.
(You're not testing the conditions that lead to calling one of the `f` functions. Maybe it's chalked up to integration testing?)
But yeah, I share much of your frustration. The Decision Making section should be the first thing, and it should have a test example. Longer, non-trivial example programs should be contrived to reassure people that this is not utterly impractical.
Since there are no if's (except at the end), every line is executed. Since all if's at the end can be placed on a single line, you have 100% line coverage. I don't think this language is very serious.
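To make that concrete in C terms (my own sketch, not Chaos syntax; f1/f2 are made-up stand-ins): the whole decision can sit in one return expression, so a single test "covers" that line while only ever exercising one branch.

    #include <assert.h>

    int f1(void) { return 101; }
    int f2(void) { return 102; }

    /* the whole decision lives in one return expression, on one line */
    int add(int x, int y) { int z = x + y; return z > 100 ? f1() : f2(); }

    int main(void) {
        assert(add(100, 1) == 101);  /* the line is "covered", but f2() never runs */
        return 0;
    }

The line containing the decision is covered either way; which branch actually ran is invisible to line coverage.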
As you say, this doesn't sound right. If you force the programmer to write branch-free code (this appears to be the founding principle of the language [0]), that doesn't mean your boundary-value analysis [1] problems go away; it means you've made them less explicit.
As you say, there's full line coverage for tests, but no guarantee of full equivalence-partition coverage. It's quite possible for branch-free code to fail on certain values. ctrl-f for overflow in [2]
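A classic instance of that (my own C sketch, not taken from [2]): the textbook midpoint calculation is branch-free and trivially "covered" by a single test, yet it's broken at the boundary.

    #include <limits.h>
    #include <stdio.h>

    /* branch-free, one line, fully "covered" by any single call... */
    int midpoint(int lo, int hi) { return (lo + hi) / 2; }

    int main(void) {
        printf("%d\n", midpoint(0, 10));                  /* fine: 5 */
        printf("%d\n", midpoint(INT_MAX - 1, INT_MAX));   /* lo + hi overflows: undefined behaviour */
        return 0;
    }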
The 'Showcase' link is broken, the GitHub repo seems to show no examples, and the linked StackOverflow chaos tag is never used in relation to this language, so as you say, I'm not sure how serious this language really is. Fun idea though.
(It's entirely possible I'm misusing the term boundary-value analysis. Corrections welcome.)
Did anyone else think this was a joke language at first? My initial impression was that it was an esoteric language that was tongue-in-cheek, passing off its oddities as language features.
Firstly, there's the name: chaos, which evokes the opposite of what I want my code to look like.
Secondly, the first three "features" all come across as deliberately goofy to me. The first two promise seemingly implausible things: zero cyclomatic complexity and always 100% test coverage. The justifications for these promises seem bizarre: only function returns allow decision making? (A seemingly terrible place to make decisions.) One test is always sufficient to have coverage? It isn't object-oriented? (Obviously, there's a lot more to functional than that.)
> Firstly, there's the name: chaos, which evokes the opposite of what I want my code to look like.
The full name is "The Chaos Programming Language", which I understand as "the language to program chaos". It's in line with the motto "Turn chaos into magic!", as you program to turn an arbitrary state (chaos) into what you want (order / magic).
I still think it's a joke language, but this part is well thought out.
The fact that there are two aliases for each (most?) builtin type immediately turned me off. Maybe there's a good reason for that particular feature of the language, but I would not throw that at a prospective user in the getting started guide.
> Decision making(a.k.a. control structures) in Chaos language is only achievable on function returns for the sake of zero cyclomatic complexity.
...
> At first glance, defining the control structures in this way might seem so unnecessary or inconvenient but this is the pinnacle of writing 100% testable, clean and error-free code in any programming langauge by far.
I’m skeptical about this claim. I’m not sure I believe 100% error-free is possible in any language.
I agree with the other commenter about the comma in this sentence. But I'd also like to point out that 100% error-free code is possible. There's a whole branch of computer science dedicated to it: formal verification. I personally have used Coq to write certified code in the past, and I'm quite confident that my code was 100% error-free.
Of course, there are some qualifications one should make regarding such
a claim: you have to trust that your specification is "correct", that your hardware is functioning correctly, that the proof checker didn't accept a bogus proof, that the underlying logic (e.g., the calculus of inductive constructions) is consistent, etc. That's a lot of things to trust, but the point is that you don't have to trust yourself.
I really can't trust myself to write a specification. I know this because I have written errors in tests before - even when all I'm doing is trying to work out some properties that the answer ought to have, I can still make a mistake. It's just usually less likely that I would make a mistake, because the specification is simpler than the algorithm.
So, it is not really fair to downplay the chance that you could have a wrong specification, although it would be equally wrong to use that as an excuse to downplay the value of Coq.
You can't prove that something is "secure", because for a proof to be applicable in mathematics/computer science there has to exist a precise definition of what "secure" means.
If someone just says "This is a formally verified protocol" without stating what they actually checked for, they are salespeople, not mathematicians.
"The authors of the KRA paper were able to understand what the proofs were about, and why they don’t cover the KRACK vulnerability. Even though the original proofs didn’t reveal security flaws, a principled approach would use these proofs in order to discover where to look."
But then any more or less complex code that involves conditions will be spread over multiple functions. Even if that makes it testable, it might become incomprehensible very quickly.
Edit: If code needs a certain number of conditions, they will be there in one form or another; that cannot be avoided. I would like to see the real benefits of their choice in real, non-trivial examples.
It's the first time in many years that I've looked at a new language and it feels good. I had no WTF moment, thinking "how could a human mind come up with this?" or "why do they want us to feel miserable doing that?"
I only wonder how it would feel using it on real world problems.
One question to the author: is there string interpolation?
world = "world"
print "hello #{world}"
Finally, the links in the Docs section of the footer are 404s, and the GitHub link points to the home page of GitHub, not the project.
Cyclomatic complexity is a whole-program property. Your functions are just nodes in the control flow graph, where the real complexity appears. One-branch functions are definitely a good hack, though. :)
Numbers are literally never mutable in any language that isn't designed intentionally for comedy. Variables of a number type may be mutable, but the numbers themselves never mutate. Strings are typically immutable in most languages, but that does vary some (generally surprising people who expect primitives to be immutable).
Ah, I did not dive that deep. Now I am curious what the impact of deep copies on performance is, and if the language employs any clever optimizations to improve performance.
The language is at an early development stage. So "How immutable should variables be?" is open to discussion. You might want to open an issue and propose this in the issues section: https://github.com/chaos-lang/chaos/issues
Reassignment is mutation. It's really not... on a spectrum of mutability. JavaScript const objects allowing mutation of internal data is vaguely on that spectrum, even though it's misleading and confusing. Literally reassigning a variable isn't even up for debate.
Java strings are immutable. Java variables with a string type are not. It's really not up for debate. You can't mutate a string in Java (you can in Ruby), but you can mutate a string variable by reassigning it.
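The same distinction exists in C, for what it's worth (my analogy, not the parent's):

    #include <stdio.h>

    int main(void) {
        const char *s = "hello";  /* the string "hello" itself is immutable */
        s = "world";              /* reassigning the variable mutates no string */
        /* s[0] = 'H';               won't compile: that WOULD be mutating the string */
        printf("%s\n", s);
        return 0;
    }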
I have never worked with Haskell but admire it from afar, and I'm honestly astonished that the REPL works this way. I would assume this would error at the compiler for a normal program, and that the REPL runtime would enforce the same constraints.
You're just shadowing the previous binding with a new one, not actually mutating anything[1]:
λ> let foo = 1
λ> let foo = 2
λ> foo
=> 2
could be rewritten[2] as
λ> let foo = 1 in ( let foo = 2 in foo )
=> 2
which makes the semantics more clear.
This idiom is also quite common in Clojure, another famously immutable-first language:
(let [foo 1
      foo 2]
  foo)
;; => 2
In my opinion (predominantly informed by my experience with those two languages contrasted with the usual suspects from mutable-OOP-land), the benefit of immutability isn't what's happening in your own local scope, since it's typically quite easy to track what your immediate context is doing to a variable. Instead, it's the language-enforced promise that the only changes to local values can come from local actions: your function and method calls can never have spooky side effects, nor can other threads if you're in some hellish concurrent environment.
In this light, Chaos's apparent syntactic sugar of
a[15] = 44
to mean (using some hand-wavey pidgin)
a = a.updateAt(15,44)
seems quite reasonable and fully in the spirit of immutability as a meaningful language feature.
[1] In my scratchwork project, my repl actually yells at me with an annoying warning about this:
<interactive>:8:5-7: warning: [-Wname-shadowing]
This binding for ‘foo’ shadows the existing binding
defined at <interactive>:6:5
[2] I believe that the haskell repl actually functions like a do-block, so the pedantically correct desugaring possibly involves lambdas and bind, but that's not really an interesting distinction here IMO and makes the example less clear.
> the benefit of immutability isn't what's happening in your own local scope, since it's typically quite easy to track what your immediate context is doing to a variable.
From my experience any sufficiently badly written code is hard to understand even in the local scope. The bar for hard to understand code isn't that high because the complexity of well written code is very close to zero.
Most code that avoids reassignment can be read from top to bottom. Code with local mutation can require backtracking and then the complexity can start exploding but that doesn't necessarily apply to code with mutation across functions. A simple list.add() doesn't cause anyone's brain to melt.
A classic is that someone writes a for loop like this: for(;i < 10; i++). You now need to go back and check what the start value of i is even though 99% of the time it is 0. Then you need to check if the counter is used for anything after the loop has exited because sometimes a loop is just trying to find the first element that matches a predicate. Now imagine if the i variable gets reused by a second loop. You now must check if the i variable is resuming the loop at the same position by backtracking. Every single line of code has the potential to amplify the complexity of the entire function.
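Something like this (a contrived C sketch of my own) is the pattern I mean - every later line sends you back up the function to work out what `i` currently holds:

    #include <stdio.h>

    int main(void) {
        int items[10] = {3, 7, 42, 5, 42, 9, 1, 0, 2, 8};

        int i = 0;
        for (; i < 10; i++) {                  /* where does i start? scroll up to check */
            if (items[i] == 42) break;
        }
        printf("first 42 at index %d\n", i);   /* i doubles as the loop's "result" */

        for (; i < 10; i++) {                  /* silently resumes where the first loop stopped */
            printf("%d\n", items[i]);
        }
        return 0;
    }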
All that meaningless complexity has no reason to exist. It doesn't provide any value and it doesn't cost anything to avoid it.
I guess an argument can be made that forcing a restart of the REPL is a pretty harsh punishment, but I would expect Haskell (of all languages) to at least require some kind of tabula rasa.
If it makes you feel better node doesn't allow you to reassign const declarations in the REPL, which is actually very annoying if you forget about that behaviour and tend to default to writing const.
This is doing something different though. It first declares a as a num, then assigns it. There's not really a great equivalent in Haskell syntax, but in C:
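    int a;    /* declare a first... */
    a = 5;    /* ...then assign it in a separate statement */
    a = 6;    /* a later reassignment updates the same variable - no new binding is created */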
> Decision making(a.k.a. control structures) in Chaos language is only achievable on function returns for the sake of zero cyclomatic complexity.
This looks as original and language-defining as Rust's borrow checker. I really wish the authors changed their docs accordingly. Everything else is pretty standard on a conceptual level, but this still has me wondering whether it's a work of pure genius, a joke language, or both.
Yes, you are correct. The very intuitive approach we took on decision making is the key point of the language.
I came up with this idea of designing a language with no "if" after dealing with various untestable codebases for years. They were literally untestable because of the technical debt caused by the extensive amount of "if" usage in the function bodies; guard clauses especially mean at least one "if" for almost every function. So in the end I was exhausted by the ifs and decided to create a language that forces you to have minimal technical debt.
I'm still not convinced it doesn't just sweep the complexity under the rug. Ultimately you still need to branch unless you only want the language to do pure mathematical computation or something like that. Only now every branch needs to be hidden in a function call.
I have a large Elixir project with less than 10 branches (if then else) and they were written by other developers. Conditional logic is done either in pattern matching in function definitions (and some guard) or case/cond or with/else.
I guess case/cond can be implemented with the return mechanism of Chaos. Maybe it's going to feel weird, maybe not.
There is no pattern matching and I got the feeling that's going to complicate things, but again, it can be emulated to some degree by the return mechanism. Instead of matching directly on the argument, you end up with something like
str def a(n)
    num x = n
end {
    x == 0 : "a"
    x == 1 : "b"
    default: "c"
}
Having to define that x variable feels like a waste of code and reasoning on x instead of n (the argument of the function) is a pity. Maybe the function body can be empty. I didn't install the language.
I wonder how that scales on real world cases, for example matching the internals of some complex data structure.
I am now wondering what the benefit of Chaos' approach is. Chaos is making some pretty substantial tradeoffs. Going with conventional pattern matching (as part of the declaration) isn't such a big leap.
That doesn't seem possible. A function with no flow control statements has a cyclomatic complexity of 1. I.e. function retVal(x) { return x } has a cyclomatic complexity of 1 because there is 1 path through.
Even ignoring that issue, even if the control flow statement is in the return, it still has a cyclomatic complexity of 2. E.g. function isEqual(x, y) { return x == y } still has a cyclomatic complexity of 2; you're just hiding it. The actual logic flow, in terms of nodes, still has two paths through it.
It's entirely possible for a single line to have a cyclomatic complexity of more than 1. Ternaries, for example, would have a cyclomatic complexity of 3.
> This looks as original and language-defining as Rust's borrow checker.
I'm not so sure it's original; many compilers work on intermediate representations whose nodes (basic blocks) are a sequence of non-branching operations terminated by a branch. Though people don't use these internal representations as "programming languages", afaik they technically are.
Creative idea! There are no branches or closures, but the language does allow loops and recursion. These have non-zero cyclomatic complexity. A loop with either 0 or 1 runs is basically an if statement.
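In C terms (my sketch - I don't know the exact Chaos loop syntax), a loop whose body runs at most once is just an if in disguise:

    #include <stdio.h>

    int main(void) {
        int x = 150;
        /* the loop body executes zero or one times, exactly like an if */
        for (int once = (x > 100); once; once = 0) {
            printf("x is big\n");   /* runs only when x > 100 */
        }
        return 0;
    }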
Seems like we have a fellow Chaos Magick practitioner right here. Funny that the creator named the language "Chaos", the module manager "Occultist" and the modules "Spells". Now I can't dislike it even if I found some of the language features quite alien.
Yeah, you have to dig several pages deep into the tutorial before you see meaningful code examples that distinguish it from other languages. This is a mistake that language websites make way too often.
This alone made me stop reading. There is absolutely no reason for this. It's an extra thing I have to think about when reading/writing code, with literally zero benefit. Now people new to the language have extra stuff to learn. Languages need fewer ways to do the same thing. Not more.
Looks neat. One recommendation, before the section "Influenced by", could you put a code snippet of what Chaos would look like? It can be HelloWorld, but a more real-life example would be appreciated. (A few example sites that do this: https://vertx.io, http://www.scala-js.org)
I appreciate the minor insight provided by seeing a language where control structures can only happen upon function return. I've had to teach recursion to many beginning students, and I have a feeling that seeing something like the Chaos language would help the concept of recursion gel with them. I'm very unsure that the language itself is useful or good, but I was certainly delighted by that small bit of insight.
Maybe some unix pros here can tell me: why is the occultist package manager not installed via sudo, but the language itself is? It seems like an unnecessary security risk to run sudo on a random download.
Thanks a lot for finding the way the Chaos language handles decision making interesting.
There are a lot of alternative keywords, especially for type safety and function definition related keywords, with the purpose of attracting programmers from other programming languages and making them feel at home. At least that was the intention.
No language I have ever worked with allowed a codebase with intermixed keywords for defining a function. In terms of feelings, it's as alien as it can get.
Interesting idea. I mostly agree with limiting the amount of control flow. I generally try to avoid as many ifs as possible.
To my mind it isn't as alien an idea as most people here in the comments seem to argue. I suppose if you used a function with only an "end" clause, it would end up looking mostly like how one generally writes functions in most functional languages, where a lot of functions consist entirely of a switch/match on the input.
Maybe because the syntax is very Ruby-like you see a lot of people here expecting more object-oriented design.
I think the concept of having conditionals at the end of functions is more interesting than people here are giving credit. It could go a long way to getting rid of incomprehensible nested if statements.
But all the aliases need to go. What's the point of having three aliases for an import statement other than to confuse new programmers? It seems like one of those ideas that sounds good on paper - including keywords from people's first languages - but in the end it'll just confuse everybody.
You know those programmers that confuse CS and programming? This is what I imagine they would come up with as they just don't have the tools of formality.