Think about where you would use CLJS - my use cases wouldn't have an issue with an extra 100kb of code. You don't use CLJS to do jQuery animations on your web page - you use it to build complex single-page apps or write server-side code/scripts.
You don't use Clojure for performance anyway - it's going to be slower by default (because of immutability/persistent data structures, and yes, I know about the React benefits of immutability, that's not my point - you're still going through a lot more memory and stressing the GC). You use it because its semantics help you deal with your code.
But in reality, last time I tried CLJS I didn't really feel like it delivers on the productivity part, and that's mostly because of implementation issues. IMO the decision to implement CLJS on top of the Closure compiler and in Clojure (instead of going for a self-hosting compiler) was a mistake - you can't overstate the value of the REPL and fast iteration in a language like Clojure, and in my last attempt to use CLJS the compiler/REPL environment was far from what I would consider fast iteration: the compiler took forever to start because of the JVM, and while it could run as a service it needed to be restarted frequently enough that it mattered. The REPL was very unstable and would just die randomly - sometimes you'd need to refresh the page, sometimes restart the server process. Oh, and don't even get me started on the voodoo needed to get the damn thing running - install piggieback, then install austin, then add this weasel thingy, then configure the server process, all so you can get a half-working REPL and pray it doesn't break, because good luck figuring out what's actually going on. Compare this to JS, where I just go into the devtools panel and test my code.
Maybe, but things have progressed way further outside of the CLJS space. If you'd told me this 2 years ago when I was into it, I would have jumped right on it - back when people were saying CoffeeScript fixed JS's problems :D
Right now JS has persistent data structure libraries, and TypeScript is a huge productivity boost - the tooling is top notch and it makes JS manageable. Once it gets async/await (which it should in the next couple of months) I'll be pretty happy with the JS development story.
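For what it's worth, here's a rough sketch of what async/await buys you over promise chains - `fetchJson` and `loadUser` are hypothetical names of mine, not any real API:

```typescript
// Stand-in for a real network call; resolves with a fixed payload.
async function fetchJson(url: string): Promise<{ id: number }> {
  return Promise.resolve({ id: 42 });
}

// Reads like synchronous code; errors surface through ordinary try/catch
// instead of .then/.catch chains.
async function loadUser(): Promise<number> {
  try {
    const user = await fetchJson("/api/user");
    return user.id;
  } catch (e) {
    return -1;
  }
}
```

Same control flow as a `.then()` chain underneath, but the happy path and the error path read top to bottom.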
I'll miss some niceties like collection operations, homoiconicity and macros, but on the other hand I get working optional type checking and excellent tooling, and I don't have to code in an AST serialization format to get macros :D
Interesting. Do you have a link to a very basic "get started" in this space? Right now, figwheel / om or figwheel / re-frame are very quick to get started with (although familiarity definitely plays a part..). Last time I looked at JS there were 5 or 6 flux implementations competing for mindshare, and I had to stumble my way through setting up a project with webpack / babel..
ClojureScript applications start smaller than jQuery so I'm not really sure what you mean. If you're already using jQuery then ClojureScript applications are not large in comparison.
As for your experience, it sounds like a lot of your problems didn't have anything to do with ClojureScript and everything to do with trying to get overly complicated 3rd-party setups going. I've only heard woes from people trying to get piggieback + CIDER going. I now recommend that people avoid those entirely for ClojureScript anyway.
I admit ClojureScript did have startup time issues, but these were addressed earlier this year. ClojureScript can now compile "Hello, world!" from a cold JVM boot in ~1.5 seconds. Of course, auto-building has been subsecond for ages. REPL sessions start similarly fast and can run for hours without encountering problems.
ClojureScript may still not be your cup of tea but your complaints are not the sort I hear from users much these days.
This needs to be stressed. Performance on the mobile web is already bad enough. If a tool has an optional, manual "improve performance" step, that is bad: it needs to be the default case, and the tool needs to be a good citizen within the environment.
> The first question I usually get ... is “Why not Haskell?”.
> The answer is JavaScript.
My current favorite language for this is Elm[1]. It compiles to JS, is based on Haskell, and differs from both in a few simple ways[2].
Elm doesn't let you use the tons of available Haskell libraries, and makes it slightly painful to integrate with JS. But it's the coolest thing for the web so far. And here's why:
It uses a completely declarative state machine to model a fully interactive and complete GUI! This is the dream React.js and Om didn't even know they were heading towards! And yes, it is as good as it sounds.
3. a technique for modeling a GUI as a declarative state machine
PureScript only competes with #1 and #2 here. Where Elm really shines is #3, which #2 helps with a lot.
Technically #3 could be done in any language. But it really helps to model it in such an expressive language with immutable data and no side-effects (except via Signals).
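To make #3 concrete, here's a rough TypeScript sketch of the Elm-style pattern - all names here are my own for illustration, not Elm's actual API. The entire UI is a pure function of an immutable model, and every change flows through a single transition function:

```typescript
// The whole application state, as one immutable value.
type Model = { count: number };

// Every possible event, as a discriminated union.
type Msg = { kind: "increment" } | { kind: "decrement" };

const init: Model = { count: 0 };

// Pure transition function: (msg, model) -> new model, no side effects.
function update(msg: Msg, model: Model): Model {
  switch (msg.kind) {
    case "increment": return { count: model.count + 1 };
    case "decrement": return { count: model.count - 1 };
  }
}

// Declarative view of the model; a string here stands in for the DOM.
function view(model: Model): string {
  return `Count: ${model.count}`;
}

// A Signal of messages is, in effect, folded over the model with update.
const msgs: Msg[] = [
  { kind: "increment" },
  { kind: "increment" },
  { kind: "decrement" },
];
const final = msgs.reduce((m, msg) => update(msg, m), init);
```

The point of the pattern is that `update` and `view` are trivially testable pure functions, and the event stream is the only place state changes happen.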
I won't argue that Elm does anything other than an exemplary job at #3, but it is worth noting that there are some interesting efforts to build similar functionality in PureScript libraries:
To me, this is the benefit of PureScript - yes, you have to do a little more work, because you don't get these things for free from the compiler and tools, but you gain complete control over your application structure. You're not forced to work in some ambient Signal category.
You could probably implement Elm in PureScript pretty trivially, AFAICT. AFAIK Elm doesn't have any static guarantees about the absence of space leaks or time leaks. Or maybe it just lacks higher-order signals? (Which would mostly prevent those sorts of things from happening.) I found Neel Krishnaswami's papers on a static way to decide/prevent these things very interesting. I haven't found any implementation of his ideas, but they seem pretty solid: http://www.cs.bham.ac.uk/~krishnan/
Could anyone summarize how well PureScript and Elm treat sourcemaps/debugging in the browser? For a while I thought that js_of_ocaml didn't support sourcemaps, but it turns out, one of my dependencies wasn't compiled with debug flag (-g) and I was able to get a pretty good sourcemaps/debugging experience in Chrome dev tools once I fixed that issue. Is there something js_of_ocaml can learn from PureScript/Elm's JS compilation toolchain? Or should we look to CLJS (which I'm also really excited about) as the best example?
I can't really comment on source maps in Elm, but the approach in PureScript has been to generate clean, readable JS which is debuggable directly. Source maps are on the roadmap, but not really a priority right now. I haven't heard any complaints about the ability to debug compiled PureScript yet.
That's a nice approach for debugging! If the mapping is close enough, I don't mind reading the JS output. I'm curious about the general approach to compilation, though. It seems like a statically typed language could take advantage of the knowledge of types to generate an even more efficient version of the program that uses typed arrays and views (though, yes, it would require implementing a garbage collector unless relying on some kind of WeakMap in the JS engine).
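To illustrate what "typed arrays and views" would mean for compiled output - this is my own sketch, not anything PureScript actually emits: a record of two floats can be laid out in a flat `Float64Array` instead of as boxed JS objects, which the engine can store unboxed and scan cheaply:

```typescript
// Three hypothetical 2-field records (x, y) packed into one flat buffer:
// 3 records x 2 fields x 8 bytes per float64.
const N = 3;
const buffer = new ArrayBuffer(N * 2 * 8);
const fields = new Float64Array(buffer); // typed view over the raw bytes

// Field access compiles down to index arithmetic instead of property lookup.
function setPoint(i: number, x: number, y: number): void {
  fields[i * 2] = x;
  fields[i * 2 + 1] = y;
}

function getX(i: number): number {
  return fields[i * 2];
}

setPoint(1, 3.5, 7.25);
```

The catch is exactly the one mentioned above: once you manage your own buffers, you also inherit the allocation/reclamation problem the JS GC was solving for you.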
I've heard of garbage collected languages compiling to LLVM which would allow Emscripten to assist you, but I've also heard that LLVM has a really hard time with GC languages.
Right now, the translation is very direct. My basic rule of thumb is - only perform those optimizations which the user opts into. Some things are standard though, like a few inlining rules and tail call elimination, but the plan is to provide a rewrite rules engine so that the developer can be as fine-grained as they like when it comes to optimizations.
As far as I can tell Elm is more of a GUI focused language, when I was evaluating AltJS languages I was looking for something a bit more low level. The JS and DOM bindings of js_of_ocaml were what really sold me although I didn't mention this in the post.
hey didyoucheckthe, I'm working on a new programming language that I think you may be interested in if you like Elm. Couldn't find your email in your profile, but mine is cammarata.nick@gmail.com. I'd love to show you.
> Linux-based distributions stack up software -- the Linux kernel, the X Window System, and various DEs with disparate toolkits such as GTK+ and Qt -- that do not necessarily share the same guidelines and/or goals. This lack of consistency and overall vision manifests itself in increased complexity, insufficient integration, and inefficient solutions, making the use of your computer more complicated than it should actually be.
> Instead, Haiku has a single focus on personal computing and is driven by a unified vision for the whole OS. That, we believe, enables Haiku to provide a leaner, cleaner and more efficient system capable of providing a better user experience that is simple and uniform throughout.
After reading this, I wholeheartedly support this project.
Man, I've been sold on that design concept since the mid '90s.
It's pretty amazing how much power old code and old designs have over us. I don't use Windows, so I can't comment on that but as far as I know every other OS still employs those designs which the BeOS developers identified as making things far more complicated than they should actually be.
To this day, I'm not sure I've ever met anyone who could credibly explain that, however complicated modern operating systems have become, they're as simple as they can be, and that any reduction in complication (this isn't really complexity) would necessarily require a reduction in features, usability, reliability, or security.
The only people I even hear talking about anything vaguely related are the library OS / rump kernel / unikernel crowds, and they're focused on applications running in data centers.
So, all real-life builds that anyone would care about.