
Honestly, just about anything is going to be faster, productivity-wise, than C++. When you stop having to think about how you're going to structure your inheritance and classes, you--shockingly--get things done. But more metaprogramming is not the answer. And I've used Scheme since 2002 or so. I've implemented Scheme interpreters and compilers. But I'm no longer a cheerleader for macros and call/cc or Scheme (or Lisp, for that matter).

You know what gets things done and makes things easy to maintain? Boring ass code. IF statements. FOR loops. I mostly use Perl today. It doesn't get in the way. But getting things done is not trendy. That's where we are today.




You know what's the problem with boring code? It's boring. This means its information content is low, and its abstraction level is low. This means that you need more of it to express an algorithm.

When you have a lot of wordy, boring code to maintain, you have to make coordinated changes across many similarly boring places. A human's brain can only keep so many lines of context. So it becomes easier to make a mistake.

I understand that abstraction astronautics can leave you with puzzling, convoluted, hard-to-maintain code full of leaky unintuitive abstractions. This problem is not unique to Lisp macros; languages like C++ and even Java are known to be widely used by perpetrators of the above-mentioned atrocities.

What makes code easier to maintain is clear separation of concerns and low impedance between code's abstractions and the subject area. This is, again, attainable in a number of languages (though expressive power and minimalism help make it even nicer), given the right mindset and skills. I suppose John Carmack possesses both.


I recently watched a colleague write an elaborate system to parse a few different CSV feeds. There are a dozen different interfaces, mixed in with all the lovely Java design patterns.

I'm beginning to use the phrase that I'd rather deal with poorly written code than a well-planned architecture. Obviously, by "well-planned architecture" I'm referring to overly architected solutions.


When it comes to architecture, I've lately turned towards BCNF as my god. My premise is that if my data model - my internal application data, not just "the database" - is in as normalized a form as I can reasonably get it given typical constraints of procedural/OO/functional styles, my features automatically grow into a flexible and decoupled grain because they're operating on exactly the right slice of data, no more, no less. "Guess and check" and "OO design pattern" strategies don't seem to get me there because they tend to start with whatever is language-easy or looks pretty at first glance, and then take on the problems later. And it seems to work - the thing I have right now is, indeed, incredibly flexible for the amount of code involved. And it isn't really "architected" in the usual sense otherwise - there are no grand plans.
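A minimal sketch of the idea in Python (the names and data here are entirely hypothetical, not from any particular system): normalizing in-memory application data the way BCNF normalizes tables means every fact lives in exactly one place, so each feature touches only the slice it needs.

```python
# Denormalized: the customer's name is copied into every order,
# so a rename has to find and update every copy.
orders_flat = [
    {"order_id": 1, "customer_id": 7, "customer_name": "Ada", "total": 30},
    {"order_id": 2, "customer_id": 7, "customer_name": "Ada", "total": 12},
]

# Normalized (BCNF-style): every non-key attribute depends only on its key,
# so each fact is stored exactly once.
customers = {7: {"name": "Ada"}}
orders = {
    1: {"customer_id": 7, "total": 30},
    2: {"customer_id": 7, "total": 12},
}

# A rename is now a single write; code that only handles orders never sees it.
customers[7]["name"] = "Ada Lovelace"
```

Features then operate on exactly the right slice of data: order-handling code never carries customer attributes around, and vice versa.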

The only problem I'm having with this tack is that it reveals all the technical debt at once, which produces an enormous amount of pain early on. My friends smirked at my woes today of trying to make a clickable button, which has to piece together stuff from the graphics layer, input events, text fields, and internal button state. An enormous variety of data, altogether, with the debt usually hidden from view at some level. It all makes sense, it's all decoupled, the lifetime of the state is automatically managed, any configuration you want will just be a matter of making the data for it. But making that first button is quite a headache.


I had a funny feeling doing a SQL MOOC when I had to re-learn normalization and realized what a generic decoupling algorithm it is. Suddenly all OOP seemed tiny and ad hoc.


Would you mind telling me which MOOC you did for SQL? I am quite rusty (~10 years since I did any serious SQL stuff), but I am finding it is coming up quite a lot now for me.


IIRC it was Stanford's (I have memories of a mainly red interface).

http://www.erictimmons.com/node/18

I don't know if it qualifies as serious (I'd say it's challenging enough), but it was great to revisit the material with another university.


Thanks for that. In case anyone else is interested, they now have it set up as a self-paced course here:

https://class.stanford.edu/courses/DB/2014/SelfPaced/about


A well-planned architecture is often the smallest thing that works correctly. A well-architected car is unlikely to have 37 wheels (though a poorly-built one might).

You know, the ideal device is one that is not even there, yet its function still gets executed. This ideal is rarely attainable, but it's something to strive for.


Very much this. Refactoring simple code to handle more complicated situations as they develop is so much better than pre-engineering for possibilities.


This! I joined a new company recently to build out the systems. Instead of trying to predict the future and build for it, I just went ahead and built a bare-minimum architecture, using TDD as I went. The start was a little slow, but now when I get requests to change things entirely (e.g. an entire segment of logic was requested to be shifted into the database for an administrator to manage its behaviour), I get it done fairly quickly.

On a side note... Uncle Bob is my hero.


You know what's the problem with boring code? It's boring. This means its information content is low, and its abstraction level is low. This means that you need more of it to express an algorithm.

When you have a lot of wordy, boring code to maintain, you have to make coordinated changes across many similarly boring places. A human's brain can only keep so many lines of context. So it becomes easier to make a mistake.

A problem nicely summarized by Yaron Minsky (of Jane Street): "You can’t pay people enough to carefully debug boring boilerplate code. I’ve tried."


You know what's the problem with boring code? It's boring. This means its information content is low, and its abstraction level is low. This means that you need more of it to express an algorithm.

Code may also be boring simply because it is unsurprising for someone familiar with the subject matter.


"You know what gets things done and makes things easy to maintain? Boring ass code. IF statements. FOR loops." I think you are channeling some of the Go philosophy there :).


Or C philosophy perhaps?


The C philosophy is: an easier alternative to writing assembly.


"Go philosophy" apparently being an absolutist and unwavering belief in One True Way To Actually Get Stuff Done.

It's surprising that a philosophy that tries to promote simplicity also manages to come across as so elitist.


If there were multiple accepted styles, the code wouldn't be as boring. But it is, and to some of us, that's a good thing.


Who's talking about code style? Some Go users like to talk about Go as if anyone who doesn't like it just doesn't "get it", or as if they obviously don't appreciate Getting Things Done.


Racket is quite a lot more than call/cc and macros.

By the way, FOR loops invite off-by-one errors and worse. Use a combinator like map, filter, or foldr to keep things boring.
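To illustrate the point (sketched in Python rather than Racket, purely for brevity): the index-based loop has bounds you can get wrong, while the combinator version has no indices at all.

```python
from functools import reduce

nums = [3, 1, 4, 1, 5]

# Index-based loop: the bounds are easy to get wrong;
# range(len(nums) - 1) would silently drop the last element.
total = 0
for i in range(len(nums)):
    total += nums[i] * nums[i]

# Combinators: no explicit indices, so no off-by-one to get wrong.
total2 = reduce(lambda acc, x: acc + x, map(lambda x: x * x, nums), 0)

assert total == total2 == 52  # 9 + 1 + 16 + 1 + 25
```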


There are also the extremely "boring" and practical list comprehensions / iterators within Racket itself:

http://docs.racket-lang.org/reference/for.html

e.g. fizzbuzz using for and match:

    -> (for ([i (range 1 16)])
        (match (list (modulo i 3) (modulo i 5))
          [(list 0 0) (displayln "fizzbuzz")]
          [(list 0 _) (displayln "fizz")]
          [(list _ 0) (displayln "buzz")]
          [_          (displayln i)]))
    1
    2
    fizz
    4
    buzz
    fizz
    7
    8
    fizz
    buzz
    11
    fizz
    13
    14
    fizzbuzz


In for loops, in-range is faster.

http://docs.racket-lang.org/reference/sequences.html?q=in-ra...

An in-range application can provide better performance for number iteration when it appears directly in a for clause.

    -> (for ([i (in-range 1 16)])
        (match (list (modulo i 3) (modulo i 5))
          [(list 0 0) (displayln "fizzbuzz")]
          [(list 0 _) (displayln "fizz")]
          [(list _ 0) (displayln "buzz")]
          [_          (displayln i)]))


Oh, thanks for that :)


For loops in Perl come in two styles. The first is the C-style loop; the other iterates over a list, like map. The latter gets used far more often.


That, and Perl has an actual `map` function too:

  map { $_ + 1 } (@list);


Sometimes Perl can be a beauty:

  use List::Util qw(reduce);

  sub sum_of_squared_pairs {
    # sum the squares of the even elements of @_
    reduce { $a + $b } map { $_ * $_ } grep { $_ % 2 == 0 } @_
  }

  sub schwartzian_transform {
    map  { $_->[0] }
    sort { $a->[1] <=> $b->[1] } # use numeric comparison
    map  { [$_, length $_] }     # calculate the length of the string
         @_
  }


Yes, but $_ has dynamic scope (I believe). That's very dangerous in general.


Carmack seems to be going through his FP phase. A phase that many programmers go through in their younger years (Carmack must have missed it, because he was occupied with Keen, Wolfenstein, Doom, and Quake at the time), when they read SICP and learn Scheme, ML, etc., before the novelty wears off and they come back to plain old imperative, mutable programming.


What you're not seeing is the successes who achieve escape velocity. They don't come back, ever.


Ha. JWZ of XEmacs and Lucid fame nowadays uses Perl for the little tools he writes. I hate to admit it, but I think you're right.


I came back to Perl about 18 months ago, after working as a sysadmin for a couple of months and being fed up with Python's unicode handling (Python 2.x, I haven't given Python 3.x a try, yet).

I do not think Perl is a pretty language, but I have come to appreciate how useful it is. If all you want is a smallish application (roughly, less than 1 KLOC), especially if you're only going to use it once or maybe a handful of times, no other language I have met can keep up.

And for the kind of problem I typically use Perl for - reading, say, a CSV file or an Excel spreadsheet, filtering the data according to some criterion, fetching and adding data from an external source, say, an LDAP directory or a relational database, then inserting the result into a database or emitting another CSV file - it is also surprisingly hard to beat Perl's runtime performance, especially its regex engine. I'm not saying it can't be done, but for a program you're essentially throwing away after a week or so, it's usually not worth the hassle.


Honestly, just about anything is going to be faster, productivity-wise, than C++.

I don't think that's been true for a while. Boost went a long way towards making C++ much more productive, and now that's gone even further with C++11 and 14.


Compile times are still terrible, there's still a terribly high number of causes of undefined behavior that will burn through your time in debugging sessions, and there's terribly far to go before its feature list catches up to "just about anything" (albeit a slightly smaller set this time around).

Things are improving in C++-land, but I'd still place it near last.


> Honestly, just about anything is going to be faster, productivity-wise, than C++.

I can only assume you have not seen 300+ deep stack traces in lasagna Java programs. Or GWT.



