discmonkey's comments

Assuming that the plugin is enabled for the free version, CLion is also amazing for Rust. Thanks Jetbrains!

Here's hoping this won't be abused by smaller companies that will no longer want to pay for the actual subscription. I also wonder if they are moving towards a different funding model, since the IDE space is pretty competitive with a free alternative (VSCode) out there.


RustRover is already free for Non-Commercial use! I think it is the best IDE for Rust dev.


Yes, it is quite nice. That being said, I keep a little statistic about IDE usage at the Rust events I attend. I have observed RustRover or CLion only three times across the 48 events I've recorded. One of these three was an event at JetBrains. To be fair, I started my notes long before RustRover existed.

neovim is marginally more popular.

vscode is the crushing majority.


[flagged]


Using "Java-based" as a smear is pretty silly. The IntelliJ platform is an extremely solid piece of software, clunkiness and all. It does everything pretty well, with very little need for config fiddling or third-party plugins.


I won't argue with the logic, if it works, it works. But still, I tend to react negatively to "Java-based" things. I think it comes from fiddling with Xms Xmx in the past with Elasticsearch which gave me the sense that the JVM is a beast. Now you want me to use the same beast in my editor? Neovim feels a lot lighter in this regard.


First-party LSPs are great and all but some form of text editor is also at least marginally useful for writing code.


I got 'lazyvim' (a neovim distro) up and running, installed LSPs for my languages of choice (mostly java and go) and it seems like a completely broken experience to me. I don't even get any error messages telling me anything is wrong, it just doesn't work.


You may want to consider learning how to debug software before writing software.


This is the most silly comment I’ve read in a while.

If your software requires the user to debug something for you, then you're the one who should consider learning how to write software before writing software.


I use new tools all the time, almost on a weekly basis. Most of those tools have bugs; software isn't perfect. Our job is to be good at finding and fixing issues in tools, and the beauty of open source is having access to fix bugs where you find them.

It's extremely odd to consume free and open source software in any capacity and just not have cultivated the understanding and desire to improve all of the things you touch. It's one of the worst trends in programming.

Imagine a world where every tool you used, including your programming language, cost money, and you had to spend $5,000 a month just to be a developer, and any bug you encountered could only be solved through a support ticket. That's my idea of a terrible nightmare, and why I wouldn't touch something like IDEA with a 10 foot pole.


You know, it's funny, when I was a CS student I always wondered why some of my best CS professors didn't bother using the latest and greatest tooling. As I got older I realized my time has actual monetary value, and spending time dealing with finicky software is often not worth it. IntelliJ is like ~$0.50/day, and I'm way more productive with it than I am with vim (and I say this as someone who used to mostly work with straight vim in college). Maybe you don't value your time?


People in academia are notoriously disconnected from real software development, so I wouldn't use them as an example of what to do or not do.

You can be judicious with your time without avoiding every inconvenience on the way to accomplishing a goal. Inconvenience is how you find motivation and learn new things. For 50c a day you're robbing yourself of valuable knowledge and of bettering open ecosystems that are themselves fulfilling ends. If you don't put a value on learning or contributing back, then I don't see why you'd want to be in this profession in the first place.


You may not be able to think of a worse experience, but a lot of "newer" programmers may not even know what an LSP is. While it's true that I no longer need to rely on some of the benefits of JetBrains, when I was getting started, JetBrains paving over toolchain difficulties was invaluable.


Those were both built by some of the same people, and they also share some code, so it's a bit of a weird comparison.


If they were both built by the same person then the answer is obvious, just use the free and open source one without built in telemetry.


FYI the same guy wrote both.


I love the look and the idea, but I wonder if it will go the way of the small/budget phone?

Will folks' revealed preference continue to be for big and expensive?


One advantage they might have is that there isn't much on the market for low priced pickup trucks in general. I'd probably rather have a gas pickup than an electric but I don't want to pay the inflated prices that go along with them.


Agreed. The U.S. market had a very long run of both large expensive and small cheap pickup trucks, and people have consistently bought the big luxury pickups. It is why all the small trucks were axed to begin with. Even back in their prime, I saw many more F-150s than Rangers. It's an easy up-sell, as I'm sure any car salesman will tell you: for only a few more thousand you get into a full-size, and from there, add some options and it's over.


>Even back in their prime, I saw many more F-150s than Rangers.

I think you're misremembering. The streets were flooded with Rangers and S10s back in the day. Full-sized pickups have been the most popular class of vehicle for decades, but that number is grossly inflated by the number that are bought as fleet vehicles or work vehicles.


It doesn't matter why the number is inflated. If full sized trucks are most popular, then that's what you'll see more of. In any case, there are many people for whom a full-size pickup is their daily driver and not used for work. HN is constantly complaining about it, just not today apparently.


Looking at the Ranger, it sold 5.6M units between 1985-2005, its highest selling years.

F series Fords definitely outsold it, but is also a larger product line.


“It is why all the small trucks were axed to begin with.”

No, it is because of emissions regulations. A small truck can't be built under our emissions policies; it's not that there isn't a market for one.


How is that? Aerodynamics due to the bed?


The economy (checks 401k) has changed though.


Good article. Funnily enough, the throwaway line "I don't see parentheses anymore" is my greatest deterrent with lisp. It's not the parens persay, it's the fact that I'm used to reading up to down and left to right. Lisp without something like the clojure macro ->, means that I am reading from right to left, bottom to top - from inside out.

If I programmed enough in lisp I think my brain would adjust to this, but it's almost like I can't fully appreciate the language because it reads in the "wrong order".


> It's not the parens persay, it's the fact that I'm used to reading up to down and left to right. Lisp without something like the clojure macro ->, means that I am reading from right to left, bottom to top - from inside out.

I’m not certain how true that really is. This:

    foo(bar(x), quux(y), z);
looks pretty much identical to:

    (foo (bar x) (quux y) z)
And of course if you want to assign them all to variables:

    int bar_x = bar(x);
    char quux_y = quux(y);
    
    return foo(bar_x, quux_y, z);
is pretty much the same as:

    (let ((bar-x (bar x))
          (quux-y (quux y)))
      (foo bar-x quux-y z))
FWIW, ‘per se’ comes from the Latin for ‘by itself.’


One of the awesome things about LISP is it encourages a developer to think of programs as an AST[0].

One of the things that sucks about LISP is - master it and every programming language is nothing more than an AST[0].

:-D

0 - https://en.wikipedia.org/wiki/Abstract_syntax_tree


> encourages a developer to think of programs as an AST

can you imagine saying something like

> The fradlis language encourages your average reader to think of essays as syntax [instead of content].

and thinking it reflects well on the language...


  can you imagine saying something like

  > The fradlis language encourages your average reader
  to think of essays as syntax [instead of content].

  and thinking it reflects well on the language
A reciprocating saw[0] is a great tool to have. It can be used to manipulate drywall, cut holes in various material, and generally allow "freehand cutting."

But it is not the right tool for making measured, repeatable, cuts. It is not the right tool for making perfect right-angle cuts, such as what is needed for framing walls.

In other words, use the right tool for the job.

If a problem is not best expressed with an AST mindset, LISP might not be the right tool for that job. But this is a statement about the job, not about the tool.

0 - https://en.wikipedia.org/wiki/Reciprocating_saw


I think an alternative to paragraphs or some other organizational unit would be a more appropriate analogy.

The AST aspect of Lisps is absolutely an advantage. It obviates the need for the vast majority of syntax and enables very easy metaprogramming.


The lisp is harder to read, for me. The first double paren is confusing.

    (let (bar-x (bar x))
         (quux-y (quux y)))
    (foo bar-x quux-y z)
Why is the second set of parens necessary?

The nesting makes sense to an interpreter, I'm sure, but it doesn't make sense to me.

Is each top-level set of parens a 'statement' that executes? Or does everything have to be embedded in a single list?

This is all semantics, but for my python-addled brain these are the things I get stuck on.


The let construct in Common Lisp and Scheme supports imperative programming, meaning that you have this:

  (let variable-bindings statement1 statement2 ... statementN)
If statementN is reached and evaluates to completion, then its value(s) will be the result value(s) of let.

The variable-bindings occupy one argument position in let. This argument position has to be a list, so we can have multiple variables:

  (let (...) ...)
Within the list we have about two design choices: just interleave the variables and their initializing expressions:

  (let (var1 value1
        var2 value2
        var3 value3)
    ...)

Or pair them together:

  (let ((var1 value1)
        (var2 value2)
        (var3 value3))
    ...)
There is some value in pairing them together in that if something is missing, you know what. Like where is the error here?

  (let (a b c d e) ...)
we can't tell at a glance which variable is missing its initializer.

Another aspect to this is that Common Lisp allows a variable binding to be expressed in three ways:

  var
  (var)
  (var init-form)
For instance

  (let (i j k (l) (m 9)) ...)
binds i, j, k, and l to an initial value of nil, and m to 9.

Interleaved vars and initforms would make initforms mandatory. Which is not a bad thing.

Now suppose we have a form of let which evaluates only one expression (let variable-bindings expr), which is mandatory. Then there is no ambiguity; we know that the last item is the expr, and everything before that is variables. We can contemplate the following syntax:

  (let a 2 b 3 (+ a b)) -> 5
This is doable with a macro. If you would prefer to write your Lisp code like this, you can have that today and never look back. (Just don't call it let; pick another name like le!)

If I have to work with your code, I will grok that instantly and not have any problems.
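
For illustration, here is a minimal sketch of such a le macro (assuming plain Common Lisp; the name and the details are just one possible choice):

  ;; Sketch only: interleaved var/init-form pairs, followed by one
  ;; mandatory result expression as the last argument.
  (defmacro le (&rest args)
    (let ((expr (car (last args)))       ; the final expression
          (bindings (butlast args)))     ; var1 val1 var2 val2 ...
      `(let ,(loop for (var val) on bindings by #'cddr
                   collect (list var val))
         ,expr)))

  ;; (le a 2 b 3 (+ a b)) -> 5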

In the wild, I've seen a let1 macro which binds one variable:

  (let1 var init-form statement1 statement2 ... statementn)


I am not a Lisp expert by any stretch, but let's clarify a few things:

1. Just for the sake of other readers, we agree that the code you quoted does not compile, right?

2. `let` is analogous to a scope in other languages (an extra set of {} in C); I like using it to keep my variables in the local scope.

3. `let` is structured much like other function calls. Here the first argument is a list of assignments, hence the first double parenthesis (you can declare without assigning, in which case the double parenthesis disappears since it's a list of variables, or `(variable value)` pairs).

4. The rest of the `let` arguments can be seen as the body of the scope, you can put any number of statements there. Usually these are function calls, so (func args) and it is parenthesis time again.

I get that the parentheses can get confusing, especially at first. One adjusts quickly though; using proper indentation helps.

I mostly know lisp through guix, and... SKILL, which is a proprietary derivative from Cadence; they added a few things like inline math, SI suffixes (I like that one), and... C "calling convention", which I just find weird: the compiler interprets foo(something) as (foo something). As I understand it, this just moves the opening parenthesis before the preceding word prior to evaluation, if there is no space before it.

I don't particularly like it, as that messes with my C instincts, especially when it comes to spotting the scope. I find the syntax more convoluted with it, so harder to parse (not everything is a function, so parenthesis placement becomes arbitrary):

    let( (bar-x(bar(x))
         quux-y(quux(y)))
    foo(bar-x quux-y z)
    )


> Why is the second set of parens necessary?

it distinguishes the bindings from the body.

strictly speaking there's a more direct translation using `setq` which is more analogous to variable assignment in C/Python than the `let` binding, but `let` is idiomatic in lisps and closures in C/Python aren't really distinguished from functions.


You’re right!

    (let (bar-x quux-y)
      (setq bar-x (bar x)
            quux-y (quux y))
      (foo bar-x quux-y z))
I just wouldn’t normally write it that way.


The code is written the same way it is logically structured. `let` takes 1+ arguments: a set of symbol bindings to values, and 0 or more additional statements which can use those symbols. In the example you are replying to, `bar-x` and `quux-y` are symbols whose values are set to the result of `(bar x)` and `(quux y)`. After the binding statement, additional statements can follow. If the bindings aren't kept together in a `[]` or `()` you can't tell them apart from the code within the `let`.


I prefer that to this (valid) C++ syntax:

  [](){}


You'd never actually write that, though. An empty lambda would be more concisely written as []{}, although even that is a rare case in real world code.


This reminds me of the terror that is the underbelly of JS.

https://jsfuck.com/


The tragedy of Lisp is that postfix-esque method notation just plain looks better, especially for people with the expectation of reading left-to-right.

    let bar_x = x.bar()
    let quux_y = y.quux()
    return (bar_x, quux_y, z).foo()


Looks better is subjective, but it has its advantages both for actual autocomplete - as soon as I hit the dot key my IDE can tell me the useful operations for the object - and also for "mental autocomplete" - I know exactly where to look to find useful operations on the particular object because they're organized "underneath" it in the conceptual hierarchy. In Lisps (or other languages/codebases that are structured in a non-OOP-ish way) this is often a pain point for me, especially when I'm first trying to make my way into some code/library.

As a bit of a digression:

The ML languages, as with most things, get this (mostly) right, in that by convention types are encapsulated in modules that know how to operate on them - although I can't help but think there ought to be more than convention enforcing that, at the language level.

There is the problem that it's unclear - if you can Frobnicate a Foo and a Baz together to make a Bar, is that an operation on Foos, on Bazes, or on Bars? Or maybe you want a separate Frobnicator to do it? (Pure) OOP languages force you to make an arbitrary choice, Lisp and co. just kind of shrug, and the ML languages let you take your pick, for better or worse.


It's not really subjective because people have had the opportunity to program in the nested 'read from the inside out' style of lisp for 50 years and almost no one does it.


I think the cost of Lisp machines was the determining factor. Had it been ported to more operating systems earlier history could be different right now.


That was 40 years ago. If people wanted to program inside out with lots of nesting then unfold it in their head, they would have done it at some point a long time ago. It just isn't how people want to work.

People don't work in postfix notation either, even though it would be more direct to parse. What people feel is clearer is much more important.


It's not just Lisp, though. The prefix syntax was the original one when the concept of records/structs were first introduced in ALGOL-like languages - i.e. you'd have something like `name(manager(employee))` or `name OF manager OF employee`. Dot-syntax was introduced shortly after and very quickly won over.


De gustibus non disputandum est, I personally find the C++/Java/Rust/... style postfix notation (foo.bar()) to be appalling.


TXR Lisp has this notation, combined with Lisp parethesis placement.

Rather than obj.f(a, b), we have obj.(f a b).

  1> (defstruct dog ()
       (:method bark (self) (put-line "Woof!")))
  #<struct-type dog>
  2> (let ((d (new dog)))
       d.(bark))
  Woof!
  t
The dot notation is more restricted than in mainstream languages, and has a strict correspondence to underlying Lisp syntax, with read-print consistency.

  3> '(qref a b c (d) e f)
  a.b.c.(d).e.f
Cannot have a number in there; that won't go to dot notation:

  4> '(qref a b 3 (d) e f)
  (qref a b 3 (d)
    e f)
Chains of dot method calls work, by the way:

  1> (defstruct circular ()
       val
       (:method next (self) self))
  #<struct-type circular>
  2> (new circular val 42)
  #S(circular val 42)
  3> *2.(next).(next).(next).(next).val
  42
There must not be whitespace around the dot, though; you simply cannot split this across lines. In other words:

   *2.(next)
   .(next) ;; nope!
   .(next) ;; what did I say?
The "null safe" dot is .? The following check obj for nil; if so, they yield nil rather than trying to access the object or call a method:

  obj.?slot
  obj.?(method arg ...)


And what about when `bar` takes several inputs? Postfix seems like an ugly hack that hyper-fixates on functions of a single argument to the detriment of everything else.


It's not like postfix replaces everything else. You can still do foo(bar, baz) where that makes the most sense.

However, experience shows that functions having one "special" argument that basically corresponds to grammatical subject in natural languages is such a common case that it makes sense for PLs to have syntactic sugar for it.


Look at the last line in the example, where I show a method being called on a tuple. Postfix syntax isn't limited to methods that take a single argument.


I think it really depends, in Common Lisp for example I don't think that's the case:

  (progn
    (do-something)
    (do-something-else)
    (do-a-third-thing))
The only case where it's a bit different and took some time for me to adjust was that adding bindings adds an indent level.

  (let ((a 12)
        (b 14))
    (do-something a)
    (do-something-else b)
    (setf b (do-third-thing a b)))
It's still mostly top-bottom, left to right. Clojure is quite a bit different, but it's not a property of lisps itself I'd say. I have a hard time coming up with examples usually so I'm open to examples of being wrong here.


Your example isn't a very functional code style though so I don't know that I'd consider it to be idiomatic. Generally code written in a functional style ends up indented many layers deep. Below is a quick (and quite tame) example from one of the introductory guides for Racket. My code often ends up much deeper. Consider what it would look like if one of the cond branches contained a nested cond.

  (define (start request)
    (define a-blog
      (cond [(can-parse-post? (request-bindings request))
             (cons (parse-post (request-bindings request))
                   BLOG)]
            [else
             BLOG]))
    (render-blog-page a-blog request))
https://docs.racket-lang.org/continue/index.html


Common Lisp, which is what I use, is not really a functional oriented language. I'd say the above is okay in CL.


I must have missed that memo. Sure it's remarkably flexible and simultaneously accommodates other approaches, but most of the code I see in the wild leans fairly heavily into a functional style. I posted a CL link in an adjacent comment.

Here's an example that mixes in a decent amount of procedural code that I'd consider idiomatic. https://github.com/ghollisjr/cl-ana/blob/master/hdf-table/hd...


It's easy enough to add -> (and related arrow operators) to Common Lisp as macros.

https://github.com/hipeta/arrow-macros

The common complaint that Common Lisp lacks some feature is often addressed by noting how easy it is to add that feature.
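
For the curious, a bare-bones thread-first macro can be written in a few lines (a rough sketch, not the arrow-macros library's actual implementation, which also covers ->> and friends):

  ;; Minimal Clojure-style -> (thread-first): splice the accumulated form
  ;; in as the first argument of each successive form.
  (defmacro -> (x &rest forms)
    (reduce (lambda (acc form)
              (if (listp form)
                  (list* (car form) acc (cdr form))
                  (list form acc)))
            forms
            :initial-value x))

  ;; (-> (* 2 pi x) sin sqrt log)  ==  (log (sqrt (sin (* 2 pi x))))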


Besides arrow-macros there's also cl-arrows, which is basically exactly the same thing, and Serapeum also has arrow macros (though the -> macro in Serapeum is for type definitions, the Clojure-style arrow macro is hence relegated to ~>).


Been programming in Lisp for a while. The parens disappear very quickly. One trick to accelerate it is to use a good editor with structural editing (e.g., paredit in Emacs or something similar). All your editing is done on balanced expressions. When you type “(“, the editor automatically inserts “)” with your cursor right in between. If you try to delete a “)”, the editor ignores you until you delete everything inside and the “(“. Basically, you start editing at the expression level, not so much at the character or even line level. You just notice the indentation/shape of the code, but you never spend time counting parentheses or trying to balance anything. Everything is balanced all the time and you just write code.
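
If you want to try it, a minimal Emacs setup might look something like this (a sketch, assuming the paredit package is already installed; adjust the hooks to the modes you use):

  ;; Enable paredit's structural editing in Lisp buffers.
  (autoload 'enable-paredit-mode "paredit" "Structural editing of Lisp code." t)
  (add-hook 'emacs-lisp-mode-hook #'enable-paredit-mode)
  (add-hook 'lisp-mode-hook       #'enable-paredit-mode)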


> reading from right to left, bottom to top - from inside out

I don't understand why you think this. Can you give an example?



  (log (sqrt (sin (* 2 pi x)))


    log (sqrt (sin (2 * pi * x)))
Seems as much right to left to me as the original one. And just 2 deletions (you missed closing the opening parenthesis) and 2 insertions.


Right.

The ergonomic problem people face is that the chaining of functions appears in other contexts, like basic OOP.

Some kids trained on banana.monkey().vine().jungle() go into a tizzy when they see (jungle (vine (monkey banana))).


Not sure of other lisps, but clojure has piping. I was under the impression in general that composing functions is pretty standard in FP. For example the above can be written:

    (-> (* 2 PI x) sin sqrt log)
Also while `comp` in clojure is right to left, it is easy to define one left to right. And if anything, it even uses less parentheses than the OOP example, O(1) vs O(n).


(2 * pi * x) sin sqrt log


I have known Lisp since I read The Little Lisper around 1996, and was an XEmacs user until around 2005.

The parentheses really do disappear, just like the hieroglyphics in C-influenced languages; it is a matter of habit.

At least it was for me.


To me any kind of deep nesting is an issue. It goes against the idea of reducing the amount of mental context window needed to understand something.

Plus, syntax errors can easily take several minutes to fix: if the syntax is wrong, auto-format doesn't work right, and then you have to read a wall of text to find out where the missing close paren should have been.


> Good article. Funnily enough, the throwaway line "I don't see parentheses anymore" is my greatest deterrent with lisp. It's not the parens persay, it's the fact that I'm used to reading up to down and left to right.

  Language shapes the way we think, and determines what we can think about.
  - Benjamin Lee Whorf[0]
From the comments in the post:

  Ask a C programmer to write factorial and you will likely
  get something like this (excuse the underbars, they are
  there because blogger doesn't format code in comments):

  int factorial (int x) {
      if (x == 0)
          return 1;
      else
          return x * factorial (x - 1);
  }

  And the Lisp programmer will give you:

  (defun factorial (x)
    (if (zerop x)
        1
        (* x (factorial (- x 1)))))
Let's see how we can get from the LISP version to something akin to the C version.

First, let's "modernize" the LISP version by replacing parentheses with "curly braces" and add some commas and newlines just for fun:

  {
    defun factorial { x },
    {
    if { zerop x },
      1 {
        *,
        x {
          factorial {
            - { x, 1 }
          }
        }
      }
    }
  }
This kinda looks like a JSON object. Let's make it into one and add some assumed labels while we're at it.

  {
    "defun" : {
      "factorial" : { "argument" : "x" },
      "body" : {
        "if" : { "zerop" : "x" },
        "then" : "1",
        "else" : {
          "*" : {
            "lhs" : "x",
            "rhs" : {
              "factorial" : {
                "-" : {
                  "lhs" : "x",
                  "rhs" : "1"
                }
              }
            }
          }
        }
      }
    }
  }
Now, if we replace "defun" with the return type, replace some of the curlies with parentheses, get rid of the labels we added, use infix operator notation, and not worry about it being a valid JSON object, we get:

    int
      factorial ( x )
        {
          if ( zerop ( x )  )
            1
          else
            x * factorial ( x - 1 )
        }
Reformat this a bit, add some C keywords and statement delimiters, and Bob's your uncle.

0 - https://www.goodreads.com/quotes/573737-language-shapes-the-...


Whorf was an idiot. It’s not worth quoting him.


> Whorf was an idiot. It’s not worth quoting him.

The citation is relevant to this topic, therefore use and attribution warranted.


It’s relevant, but it’s also wrong. It doesn’t help him make the case. But sure, make sure you attribute it to the correct idiot.


This is the most elementary hurdle a lisp programmer will face. You do indeed become adjusted to it quite quickly. I wouldn’t let this deter you from exploring something like Clojure more deeply.


per se


By definition, if I knew how AI would change the world, I would invest/build things to that end. The fact that we still don't have a great AI product outside of ChatGPT shows that no one knows what will happen.


I think the author points to cultural changes in business communication, which doesn't give any clues as to where you should invest your money.


My great-grandmother and grandfather were in Leningrad during the siege. My great-grandmother continued to teach throughout. At some point she was given the option to evacuate with my (very) young grandfather over the "road of life".

As my mother tells the story, my great-grandmother had the choice of either taking a bus or hanging on to the back of some delivery truck. She chose the truck. The bus broke through the ice and disappeared under the water.

It's strange to realize how close one can be to not being "here", and how history weaves its way through your blood and ends up on the front page of Hacker News.


Thanks for this personal story with the historical connection.

I would like to invite the audience to remember how many similar stories are being played out in the present day.


Required reading for this theme, and one of my personal favorite short stories:

https://qntm.org/lena


Humans tend to like to stay where they are, in the tech context especially. It takes a long time, relative to the length of a career, to learn new systems, make new connections, and achieve some level of independence at a new job. There are also relatively few jobs that at least pretend to be somewhat beneficial to the world while paying somewhat competitive salaries (I can't think of any that I've worked at). Then there are the relationships you may have developed with your colleagues.

Finally, if everyone just leaves their jobs without trying to improve them, won't everyone run out of places to jump to eventually?


I did a research project on this a while back - and when it comes to understanding deep network learning rate, regularization, hidden layer effects, and activations, I don't think anything is better than [this little web app](https://playground.tensorflow.org/#activation=tanh&batchSize...)


The article seems to make the argument that it should happen (poorly), but doesn't provide any evidence that it will happen.


Essentially automating project boilerplate that is custom enough to need attention, but not quite custom enough where it's interesting. Some examples include creating dockerfiles, various database models and data parsers, openapi specs, etc.

