Old school developers - achieving a lot with little (dodgycoder.net)
80 points by damian2000 on July 21, 2012 | hide | past | favorite | 40 comments


Well

Unit tests are great in JS/Python/Ruby because of the nature of those languages.

In C/C++, for example, it's much more involved. In Java it's bearable because Eclipse/NetBeans facilitate a lot of it.

But take Linux kernel development, for example:

- There are two widely used "IDEs": Vim and Emacs; pick one

- Patches sent over email (yeah, please try doing a pull request to see what happens)

- No unit tests

- Limited use of debugging tools, mostly printk

And it's still some of the most solid and widely used software out there.

Tools and techniques (unit tests, CI, etc.) are good when you want several developers "with their hands on the code at the same time" and you have limited trust in them.


Patches sent over email (yeah, please try doing a pull request to see what happens)

Interesting, since git is used by kernel devs this can't just be a convenience thing. Why are they so adamant about putting the patch in the email?


Because you can directly review and discuss the changes inline without leaving your MUA. There is also no need to set up a public git repository and check all your changes into it before the pull.

git format-patch, git send-email, and git am also work really well and simplify patch creation and integration. This is basically GitHub's "send a pull request", but without depending on GitHub.
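To make the round-trip concrete, here is a self-contained sketch that emulates the mail workflow locally with two throwaway repos (paths, identities, and commit messages are invented; the send-email step is shown only as a comment since it needs SMTP configuration):

```shell
#!/bin/sh
# Illustrative only: emulate the email patch round-trip with two local repos.
set -e
tmp=$(mktemp -d); cd "$tmp"

# "Maintainer" repo standing in for the upstream tree.
git init -q maintainer
(cd maintainer \
  && git config user.email m@example.org && git config user.name Maintainer \
  && echo hello > hello.c && git add hello.c && git commit -qm "initial import")

# Contributor clones, commits a change, and formats it as a mailable patch.
git clone -q maintainer contributor
(cd contributor \
  && git config user.email c@example.org && git config user.name Contributor \
  && echo world >> hello.c && git commit -qam "hello: add world" \
  && git format-patch -1 HEAD)   # writes 0001-hello-add-world.patch

# In real kernel work the patch would go out with something like:
#   git send-email --to=<the list address> 0001-*.patch
# Here the maintainer applies the file directly; git am preserves authorship:
(cd maintainer && git am ../contributor/0001-*.patch)
```

The point is that the patch file itself is a complete, reviewable email body, and `git am` reconstructs the commit, author and all, on the other end.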


I've worked in a couple of companies that did this. We had the following flow and it worked great:

A task is assigned to a developer via email; the developer takes the current release tar from FTP and untars it, does the work, creates a patch, and forwards the patch to a colleague for review, who forwards it to the release manager, who integrates all incoming patches, drops them into a new tar, and releases it to FTP.

Some of this was automated with a few hundred lines of perl. The rest was on a whiteboard.
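The core of that flow can be sketched with standard tools (file names, versions, and the config change are all invented for the demo; the "FTP server" is just a temp directory):

```shell
#!/bin/sh
# Hypothetical sketch of the tar + patch round-trip described above.
set -e
tmp=$(mktemp -d); cd "$tmp"

# Release manager publishes a tarball.
mkdir release-1.0 && echo "timeout=30" > release-1.0/app.conf
tar czf release-1.0.tar.gz release-1.0

# Developer: untar, keep a pristine copy, edit, and produce a unified diff.
tar xzf release-1.0.tar.gz
cp -r release-1.0 release-1.0.orig
echo "timeout=60" > release-1.0/app.conf
diff -ruN release-1.0.orig release-1.0 > fix.patch || true  # diff exits 1 on changes

# Release manager: apply the reviewed patch to a fresh copy of the release.
rm -rf release-1.0 && tar xzf release-1.0.tar.gz
patch -p1 -d release-1.0 < fix.patch
```

The patch file is the unit of review and integration, exactly as with the kernel's email flow, just without git.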


Interesting

They could skip the ftp part, because git saves bandwidth

Still, this is a good way to do the work


They use scp now rather than ftp. They don't use git because it's too complicated for contractors to handle.


I like unit tests and CI even with excellent developers who I trust utterly. We just have the computer automatically check everything we might check manually. Or that we worry that we might forget about.


On Ken Thompson:

  Regarding his programming style ... hardly ever uses unit tests, 
  starts his projects by designing the data structures and then works bottom up,
  with throwaway test stubs.
Sounds to me like he does use unit tests, but then throws them away after he's content that the code is working. I think this is what most people did for non-trivial code before unit testing became a culture.


Yup, I remember when I was learning Python, before I ever wrote any real tests, all my modules would be littered with:

  import sys

  if sys.argv[1] == 'test1':
    print func1(a)
  elif sys.argv[1] == 'test2':
    print func1(b)
  elif sys.argv[1] == 'test3':
    print func2(a)
  ...
Eventually I found it was way easier to stick this in a _test.py file :)
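For what it's worth, that throwaway scaffold maps almost line for line onto a standard test file. A minimal Python 3 sketch, where `func1`/`func2` are stand-ins for whatever the real module exports:

```python
# test_mymodule.py -- hypothetical; func1/func2 stand in for the real module.
import unittest


def func1(x):
    # Placeholder implementation so this file runs standalone.
    return x * 2


def func2(x):
    return x + 1


class TestMyModule(unittest.TestCase):
    # Each branch of the old if/elif chain becomes one named test method.
    def test_func1(self):
        self.assertEqual(func1(2), 4)

    def test_func2(self):
        self.assertEqual(func2(2), 3)


if __name__ == "__main__":
    # exit=False so the process survives when this file is embedded elsewhere.
    unittest.main(exit=False)
```

Instead of eyeballing printed output per command-line argument, the runner checks every branch on every run.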


For the most part, the most important thing is to get things done: sitting down and writing code, nothing else. When you add all the fancy modern "methodologies", tools, and practices, most if not all of them just steer you away from the actual task of sitting down and writing code. Of course, this is domain-specific, in the sense that agile methodologies are less suited to a sole developer than to a team of developers or a group of teams. But in the end, getting things done is just sitting down and writing code. Everything else is "wasted" time.

No matter how good a heart-rate monitor you have, how good your shoes are, how well nourished you are, how prepared you are: you won't run your marathon any better if you aren't serious about the actual act of running. You can devise complex meditation practices and analyze your running stance and efficiency, but in the end it's the act of running that gets you through the 42,195 meters of pain. The better you are at it, the faster you finish. Equipment helps, but only to a small degree.

Focus on working, getting stuff done.


Writing code is a straightforward task. What takes days or weeks is scanning and deciphering someone else's buggy digital hairball (and those programmers also thought they were "getting things done" when they "finished" their unmaintainable messes).

I'm not a huge fan of IDEs or complex processes, but many tools do help. A debugger, static analyzer, and heck even 'grep', go a long way; really, anything that looks at the code is going to do some good. On the other hand, I find it useless to have processes that distort reality (e.g. pretty UML documents about what the code might do, if only someone had written the code that way).


grep is the single most useful tool in my arsenal. With it, I can take the source code of a library I have never looked at before and immediately find the function I want to know about. If I'm feeling nice, I might even submit a documentation update.
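For example (the library file and function name here are made up for the demo), a recursive grep gets you straight to a definition:

```shell
#!/bin/sh
# Self-contained demo: drop a fake "library" source file in a temp dir, then
# grep for a definition. File and function names are invented.
set -e
tmp=$(mktemp -d)
printf 'def parse_headers(raw):\n    return raw.split("\\n")\n' > "$tmp/http_util.py"

# -r: recurse, -n: show line numbers -- prints file:line:match.
grep -rn "def parse_headers" "$tmp"
```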


That's just the attitude that put me out of running shape for two months: I ran too much, too fast, too soon, got a stress fracture, and kept running on it until I got a worse stress fracture. The most difficult part of running is knowing how much to do in order to get better without doing so much that you get worse. The same logic works in programming: investing time so you know what you want to write not only lets you write it faster, but results in cleaner code that is easier to use and maintain, and that likely has fewer bugs.


Yes, I've experienced the same when I began running. This is equivalent to a junior developer spouting out code without thinking. It's not only the act of physically typing the code, but the thought process involved; juniors are not able to think before they type. I'd argue that people like those in the article have written more code and spent more time thinking about, debugging, and maintaining code than a huge, huge majority of 21st-century "modern day hackers" ever will. They focused on the task at hand, and became extraordinarily good at it.


I think you are most effective with the tools you have been using the longest, which are probably the ones you started with.

I started with things like QuickBasic and PHP, so debugging by print has always felt natural to me.

This is a problem with languages like Java that clearly weren't designed around this paradigm, where you should really be using a debugger or setting up Log4j or something.


Totally agree. Over time, everyone develops a particular style of working with their favourite editor/tools and it becomes very difficult to move away after a point.

I worked with Java in Eclipse for 8 years, and I am taking a self-imposed break from it, working with Java in Vim for the past 2 months. There are benefits to both approaches (editor vs. IDE), but the main thing I notice is that I am far less productive with Vim (because I am unfamiliar with it), and I frequently fight back a strong urge to go back to Eclipse (for debugging, refactoring, and object relation checking/mapping). I guess this is the kind of thing that keeps making everyone go back to their favourite IDE/editor.


As someone who writes Java using Vim for my job, http://eclim.org/ has proven invaluable. I suggest taking a look at it if you haven't already.


Although rxon is from the Emacs side, Armstrong has a pet project for an IDE that keeps a kind of browsing history related to the code. He noticed that many lookups happen before a few lines are written, and after they are written the "why?" is lost, i.e. how the author actually arrived at those lines. Hence the lookup history for the web searches.


There was no such thing as TDD programming when I learned to code. Same with these old guys. There also were no interactive debuggers (or at least not like the new ones).

Anyway, I used to use the interactive debuggers in Visual Basic and Visual Studio, but for the web it's easier to just be in an SSH terminal, so I just use logs.

If you want people to use TDD, then they should start learning it close to when they start programming, and I think that will make it much easier and more natural for them to do.

I have done some TDD, but not very much, because it always feels like extra work. What I'm used to doing these days is writing small feature test programs when I need to and it's convenient, or just running the application and looking at logs. I think if I had learned to program with TDD it would be a lot easier.


When designing software he prefers to rigorously document as much as possible up front before starting to write code...

This was really good for me to read. Coding is much more fun than planning, so I get pulled into the trap of staring at my screen while thinking things through, trying something, then approaching a better solution. I am trying to make myself create a clear plan on paper before turning my computer on. Only when I have a clear approach outlined on paper do I get to turn my computer on.

It's less "fun" sometimes, but it's much more satisfying.


A computer scientist called Alick Glennie worked alongside Alan Turing on the Manchester Mark 1 computer.

Glennie developed Autocode, a "simplified coding system" - what many regard as the first real programming language. The story goes that when Glennie developed the first compiler for Autocode, Turing was furious that precious computer time was being wasted on such a task. Turing's mind was supposedly so brilliant that to him, the task of implementing a program in machine code was mere admin, something requiring no more skill than making up a punchcard. It simply did not occur to him that other people might consider computer programming to be complex enough to be a vocation in itself.

For once-in-a-generation geniuses, no safety net is necessary. The rest of us need to be protected from our own incompetence.


Agile is about working with large numbers of people on code that's constantly changing. The examples you use seem like code that gets written once or as part of an individual project. Unit tests gain most when they get run multiple times by different people over the life of a project.


Is it? I've only been doing it since 2000, so maybe I've got it wrong. But I thought it was about working more efficiently and adaptably by tuning our processes to embrace change rather than trying to fend it off.


I'm not sure I agree. I've watched large numbers of people crash and burn running 'agile' processes.

Agile doesn't work at all with large numbers of people on code that is constantly changing. Regardless of how you package it, the only way to achieve coherency and scalability of product development is through extensive planning, solid architecture and loose coupling of components.

Agile throws those three concepts out of the window for time to market. Sure your first few iterations will survive this, but as your product grows, so will coupling logarithmically. This eventually cripples you.


I certainly agree that a lot of "agile" projects are clusterfucks, especially large ones. Of course, that's true of all software projects. And a lot of what people sell as "agile" is bullshit. So I'm not sure how much that proves.

I also agree that well-run agile approaches throw big up-front design out. But I think they can happily achieve solid architecture and loose coupling.

There's nothing you can achieve with up-front planning that you can't achieve by refactoring your design after a release. The main differences are that you need some supporting practices to make refactoring economical, and that you have much more information available to you after release than you do before-hand.


Well actually you're wrong on the following point:

There's nothing you can achieve with up-front planning that you can't achieve by refactoring your design after a release

If your application is relatively standalone then yes, but if you have heavy APIs and integration (which value adding applications usually do), you're up shit creek.


Depends on what sort of API you mean. Internal ones are fine, so I suppose you're talking about public APIs. Which again are fine on the client side; it's just the server that can be harder.

But I still think the way to good public server APIs isn't to sit in one's arctic Fortress of Architecture and think real hard. I think you just build and iterate in private, refactoring as you go, and then switch to a closed beta. And of course build your protocol in such a way that it's reasonably extensible.

Up-front planning is still no panacea. You will have to change your protocol someday. Someday soon, if you're up to something interesting, because the world doesn't stand still. And even if it does, your competitors won't.


I agree with the overall thrust, though the printf debugging technique is much more suitable for a relatively deterministic program like a compiler or command line tool than for many interesting programs like OSes and network services.


Actually, it's the other way around. Logging [which is pretty much what printf debugging is] is much more sustainable for long-running nondeterministic programs. It's far easier to look at a log and say "After 3 hours, this program started giving bad responses for these specific inputs".

Debuggers are great when you know where to attach and what to look at, but finding that information out is far easier to do from a log.
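As a minimal sketch of the idea (the handler setup and `handle_request` function are invented for illustration, not taken from any real service): timestamped log lines let you correlate a bad response hours in with the exact input that caused it.

```python
# Minimal logging sketch; format and function are hypothetical.
import logging

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
)
log = logging.getLogger("service")


def handle_request(payload):
    # Log input and output so a failure long after startup is traceable.
    log.info("request: %r", payload)
    result = payload.upper()  # stand-in for the real work
    log.info("response: %r", result)
    return result


handle_request("ping")
```

Grepping such a log for the first bad response, then reading backward, is often all the "attach point" a debugger session needs.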


I don't like this title, because emacs is not "with little". Nor is the command-line interface. These lightweight development tools took a lot of work to get right. Creating a modern keyboard-only (mouse breaks flow) development environment that works is no small accomplishment.

My tooling preference is evolving a bit. I used to hate IDEs, because I associated them with the (thank God it failed) attempt to commoditize programmer talent (starting in the 1990s with VB) that hijacked "object-oriented programming" and led to 21st-century spaghetti code. On the other hand, IDEs can actually be damn useful. You're in for hours of misery if you try to do Java development without an IDE, and IDEs are better adapted to one reality of software development: that most professional programmers spend more time trying to figure out other people's code than reading their own. The click-and-navigate capability (which automagically takes you to the file and line where a method or class is defined) is valuable.

Something Google had that I liked a lot is a code-navigation tool, much like a read-only IDE, served on the web, for its codebase. Very useful.

IDEs also help you navigate the dependencies and cruft-sprawl that are pretty much unavoidable on large Java projects. If you're on-boarding into an existing Java project and want to be productive in your first week, I think using an IDE is the only answer.

For my own projects (where I'll often use Scala but never Java) I still use emacs and probably always will, but if I'm working in a 100kLoC codebase of mixed Scala and Java, using a build and version-control system I'm not familiar with, I'm going to use the IDE, at least when I start out. IDEs aren't perfect: a lot of the things they hide or for which they automagically do the work (build system, version control) are things a programmer will actually want to understand at some point. But it's nice to be able to be productive (in a software environment you didn't create) on the first week.

Debuggers are another area where I'm coming to learn that printlining isn't always enough. The name of the game in debugging is not to break flow. Pinging about files inserting printlines (trying to figure out where to put them, and how to handle loops where you strictly do not want a printline for each time you're in the loop) can involve too much orthogonal thinking and context-switching to be productive, so an interactive debugger like what Lisp has or what Eclipse offers can be a godsend. Often printlining is the best solution because you know what you want to see in the execution (it's not "ad-hoc" debugging) but sometimes you don't. Once you start having to think about how to printline something, you're better off with an interactive debugger.

Unit tests, for me, are about not breaking flow. I'd rather do the debugging while the problem is fresh in my mind than possibly months later. I'm going to "REPL-test" the thing in any case, and I might as well turn that session into something persistent. I certainly won't remember, 3 months later, how comprehensive my REPL testing was. If I have unit tests, I can check and see what's covered and what isn't.

I don't write unit tests because I'm a nice guy. I do it because I'd rather spend an additional 20 minutes writing tests-- plus whatever debug time happens on account of bugs found, but that's strictly less time than it would cost me or someone else to debug it later-- than have to deal with a context switch weeks or months later to fix the damn thing.

As for Agile and Scrum, I think that stuff is mostly dopey pixie dust that seems progressive because it's shiny and new, but most of it's neo-Taylorism, and Taylorism didn't fucking work the first time around. "Agile" is too ill-defined to mean much anymore. As for Scrum, structuring time into "iterations" is stupid (sure, you can call a 17-day period an "iteration", just as you can call an oblique trapezoid a "skwirk" and the class of mammals over 250 pounds "mforzas", but it doesn't mean anything), and most of this agile neo-management stuff can become a bog if, through benign neglect, it devolves into normal human behavior.

One example is stand-ups. If you're going to have frequent status meetings, stand-ups are the best way to do it. People tune out in meetings of more than 5 people except for communication directly affecting their work, so there isn't much learned in them. Stand-ups exist to encourage short status meetings rather than sprawling, boring slogs, so they're a useful innovation... when people actually stand the fuck up. If people treat it as a stand-up meeting (i.e. show up on time, actually stand, and only address issues that are directly blocking work) it can be useful, but if not, it just devolves.

The real purpose of stand-ups, by the way, is to prevent the issue where people delay communication on things they need out of fear or apprehension (that is, they're afraid to ask for what they need from people) and linger around being blocked on their main project, instead doing low-priority work. The purpose of standups is to allow people to say, objectively, "this issue is blocking me" and allow group pressure (rather than management fiat) to encourage resolution. It gives the blocked person an opportunity to state a blocking issue objectively without pointing a finger and the group pressure encourages people who have the power to resolve the blocker (who now not only know the blocker exists, but know that everyone else knows that they know, i.e. "common knowledge" in the modal-logic sense) to fix it. Without something like standups to force this, people with poor communication styles or avoidant personalities can end up lingering on low-priority work for weeks. So standups can be legitimately useful. On the other hand, most of this agile "magic sauce" is neo-managerial bullshit that quickly devolves into old-style bureaucratic muck.

For example, a lot of these "agile" startups that have standups have half the people not even standing up. It can creep into an hour-long sit-down meeting... except now it happens every day instead of once every 2 weeks. So now you're having 60+ minute long, boring status meetings every day, and productivity goes way down.


If you're on-boarding into an existing Java project and want to be productive in your first week, I think using an IDE is the only answer.

I would argue that if you're using Java, an IDE is the only way to stay sane. Context-aware class navigation is pretty much a requirement on any sizable project. IntelliJ can make Java almost enjoyable to use.

Your comments on agile and standups are spot on. Where I work now we do real standups. The group makes everyone actually stand up, and keeps people who tend to wander on topic during their turn. The meeting is also strictly 15 minutes or less. It's the best daily-type meeting I've ever had at any job, and it works pretty well.


I think there's a plus and a minus to the class navigation features of IDEs. Undoubtedly they make it faster to find your way through a thread of code, helping you to avoid breaking concentration and flow. However, I also observe that many of my colleagues who exclusively use an IDE never actually learn the structure of the codebase. Starting in on a bug or a task from scratch, they don't have the instinctive knowledge of what file & method to go to straight away to start working on the problem which I feel like I've gained via time spent navigating the codebase more manually.


> I would argue if you're using Java, using an IDE is the only way to stay sane. Context aware class navigation is pretty much a requirement on any sizable project.

This applies to any language, not just Java.


It's not so much the language as the practices that have developed around it, such as the "everything happens somewhere else" principle of strong OOP. Which has some advantages, but means that you'll be flipping through lots of classes to trace most code paths.


If you have a large codebase written by several generations of developers you'll get this. Even in languages without OO support.


Java... I think IDE + Java has the "four-wheel drive" problem of getting you stuck in a more inaccessible place. There are a lot of ad-hoc software lifecycle structures that are so unnecessarily complex they can only be used with IDEs, that exist because IDEs let people get to that level of complexity.

I don't like Java even with an IDE, and I especially dislike Maven, but Scala and Clojure are two of the most exciting languages out there. They're showing an ability to build communities that Haskell and OCaml seem to lack (not a fault of those languages; it's just really hard to get people to learn "new everything"), so they're probably going to either be "the winners" when the divergence/convergence cycle of programming languages moves back into a convergent phase, or predecessors of whatever wins. Scala and Clojure are probably the "best bets" right now. The "language of 2020" won't be Java or OCaml or SBCL, but Scala has a fighting chance.

On standups: if you're going to do a daily status meeting, then standup is the way to go.

That said, I dislike status meetings because I think they burn a lot of time, involve too much context switching, and don't solve problems that are better solved by direct communication. Also, audit cycles don't have a one-size-fits-all pattern. You might want to ping an entry-level programmer daily in the first six months and 1-2x per week after that, while for a more senior person a higher-level check-in might be appropriate.

One thing I will say is that a formalized status-reporting infrastructure (even one that costs a half hour per day) is much better than ad-hoc micromanagement. I worked at a company where 2-4 daily status pings (that would evolve into half-hour detailed conversations about minutiae) was the norm and that was a massive failure.


> I associated them with the (thank God it failed) attempt to commoditize programmer talent

Why do you say it failed?

I do consulting for Fortune 500 companies, and I am pretty convinced it was successful, based on the majority of guys we get assigned to our projects.


You've seen a project you'd call a success?

I'm kidding, sort of. I haven't seen the inside of a Fortune 500 company in years; I'm in startup-land now. But from the reports I hear, I imagine it's the same. Lots of "commodity" programmers driving the costs up an order of magnitude beyond what more talented programmers consider reasonable for the functionality. Then they pay a few brilliant people, often consultants, to drag them across an arbitrarily designated finish line.

Meanwhile, startups, small companies, and the few large companies who take engineering seriously are bidding up the rest of the talented programmers. With cash, with freedom, with the ability to actually make something they aren't depressed by.


Yes, I would say that you described the situation pretty well.

The problem is that even medium-sized companies are following suit. My problem while looking for jobs is that it is becoming the same everywhere.


Debuggers are also great on platforms without modern conveniences such as memory protection, e.g. microcontrollers. Segfaults are annoying, but at least they are clear; mess up on a machine with no memory protection, and you start executing random memory.



