My bag of tricks – loose notes, design patterns, rules-of-thumb (gordonbrander.com)
412 points by necrodome on Aug 31, 2018 | 79 comments



I regret that I didn't collect my thoughts like this earlier in my career. I personally learned and internalized many principles, but by the time I was tech leading and mentoring, they had all fallen to the level of intuition and I couldn't clearly explain the "why"s of things.


George Polya (excuse the spelling) wrote a book called How To Solve It. It teaches heuristic methods, mainly in mathematical problem solving, but the real value comes when someone somewhat rigorously walks you through methods you implicitly know and use.

Anyone who feels an honest resonance with your words should read the book. I can't put it any other way.


It's a great book (I'm just about to finish it). But let's not forget that the goal is to internalize problem-solving heuristics so that you use them intuitively.

In fact, the book itself has two audiences - the problem solver, and the teacher of future problem solvers (with the suggestion that if you're the former, you can also be the latter by talking to yourself). So the book gives you both a bunch of stuff to internalize, and a discussion on how to teach that stuff to other people.

--

Tangentially, reading this is making me do some serious soul searching about the way I approach programming. For instance, Polya emphasizes reviewing your work as an important step in solving a mathematical problem - not (just) to re-check correctness, but to spend some time thinking about the structure of your solution, and learn something from it. This is something I just realized I rarely do in programming - I don't review my code to learn more from the very solution I just implemented. I usually just commit and continue to the next thing. This is a thing I'm going to work on changing.


The hard part of this is that most companies aren't willing to support it (monetarily). You're expected to spend most of your time coding or, at best, thinking about the next problem. Thoughtful reflection rarely gets the support it ought to.

That said, I think it's worth investing your own time in doing it. I feel a lot more at ease when I take time to think about my work, and I think it pays off in my own career in the long run, even if it costs me some of my own time.


I do typically review my code just before I commit it as I write the commit message, but it's not much of a high-level overview.


It's also such a good book for mentoring and teaching. It's really important to internalize the difference between giving someone the answer and giving someone the tools to find the answer.


Yes. And the book goes even deeper, discussing the "how" of giving tools - how to give them while preserving interest, sense of achievement, and maximizing the amount of discovery the student is doing themselves.


That's a personal wiki. I think it's invaluable, and since I started building mine with org-mode I feel much more accomplished. That's because some tasks' natural outcome is a bit of knowledge, which you can't store any other way.

In the old days, many German scholars would build a similar thing using index cards. It was called a Zettelkasten.


Thanks for providing the term. I was surprised to find a well-organized site on this topic: https://zettelkasten.de/posts/zettelkasten-improves-thinking...


There's a blog by a (German?) scholar that covers all kinds of Zettelkastens extensively. Apparently the author is heavily inspired by the method of the German sociologist Niklas Luhmann:

https://takingnotenow.blogspot.com/2007/12/luhmanns-zettelka...

https://takingnotenow.blogspot.com/2016/09/luhmanns-zettelka...

https://takingnotenow.blogspot.com/2016/08/my-translations-o...

And, just for quick reference, his entire 'zettelkasten' tag:

https://takingnotenow.blogspot.com/search/label/Zettelkasten


I've recently come across this method. This seems to be a nice minimalistic implementation of the Zettelkasten method: https://github.com/renerocksai/sublimeless_zk

Also some other wikis have similar principles, for example: https://tiddlywiki.com/static/Philosophy%2520of%2520Tiddlers...


How are you using org-mode for this?

I have recently switched all my note-taking and task/todo management to org-mode, but I'm still figuring out how to use it properly. At the moment, I just have a personal.org, refile.org, and project-specific .org files, all kinda taking the shape of headings with sub-sub-sub-sub-headings inside until there is either nothing or some content inside, but it doesn't feel like a personal wiki. It feels more like an ordered mess, kinda. It's definitely better than anything I've ever used before.

Any good tips you want to share?


That's not an easy question. I think you need to enforce some style guide, like Wikipedia does.

org-mode is a bit like C++ or LaTeX. I say this as a good thing. It has tons of features, but you need to choose what to use and what to keep out.

For me, something that works great is to keep all org files in the same folder, with a flat structure. That includes my task list and my calendar. I version control everything, except organizational things that change often instead of growing the way articles do. I also keep the first quick drafts of any article out of version control.


Thanks for the reply!


It’s nice to have a personal man page folder on different subjects. I personally use a binding in Emacs to bring up a helm buffer to select one quickly. A tab completion function tied to an alias for it called “mm” is also useful, so you can access it from the command line too.


Org-wiki and linking between pages is a godsend. Add a keybinding to search pages using helm and you’re up and running.


I guess I should check out helm since it's been recommended a couple of times now!


I don't think you really need Helm explicitly for this (though you could make use of it if you're using it for other things).

The standard Emacs function for prompting for data with autocomplete is completing-read. You could build an elisp function that prompts for the name of a wiki file with completing-read, and then visits it. The autocompletion will be provided by whatever package you have configured globally - be it Helm, or Ivy, or something else.

A quick example of how I started using this feature - I have a half-done invoicing system (for turning time clocked on org-mode tasks into PDF invoices). An entry point to invoicing a customer may look like this:

  (defun invoice-customer (customer-name)
    (interactive (customers--completing-read-customer))
    ;; TODO Create an org document containing the invoice table and link.
    (message "Invoicing %s" customer-name))
Note that the "interactive" declaration has a function call in it. Now the completing-read call:

  (defun customers--completing-read-customer ()
    "To be used as a value of `INTERACTIVE' declaration for functions that need to read a customer value."
    (list (completing-read "Customer: " (mapcar #'car *customers/customers*) nil t nil nil *customers/current-customer*)))
There's a bunch of parameters to completing-read, but at a glance you can see the prompt ("Customer: "), code generating a list of entries (I'm doing a mapcar over customer database; you might want to do a mapcar over a call to directory-files), and an optional default value.

Or, instead of all that, you can do what I actually do for the wiki - bookmark the wiki folder itself (with C-x r m in a dired buffer), and use C-x r b foldername to jump to it, and use incremental search in the resulting dired buffer to find the entry you need.


Helm is great. Others prefer Ivy, but I prefer the way Helm works.


Yes, it is a personal wiki. I started mine after reading Pragmatic Thinking recently. I have some examples from others too in my wiki: https://github.com/hrnn/wiki/wiki#personal-wikis-examples


Another great list of personal knowledge bases I found by clicking through your links:

https://github.com/RichardLitt/meta-knowledge


I built mine with... Google Docs because I wanted something simple that was accessible with all my devices.

Knowledge pays compound interest: the more you know, the easier it is to learn new stuff (more branches on the semantic tree). Just jotting down simple unconnected facts and organizing them later has helped me become a much more competent person.


I did something similar. Mine is a bunch of text files stored in Dropbox.


Similar for me - a bunch of text files (org mode) in a Dropbox folder.

I'm thinking about publishing a subset of it, but at this point it's one big mix of general knowledge and pretty personal stuff.


I think that what you're describing there is known as the curse of knowledge:

https://en.wikipedia.org/wiki/Curse_of_knowledge


Ma [0] was also referenced by Alan Kay when talking about smalltalk messages [1]:

> The big idea is "messaging" - that is what the kernal of Smalltalk/Squeak is all about (and it's something that was never quite completed in our Xerox PARC phase). The Japanese have a small word - ma - for "that which is in between" - perhaps the nearest English equivalent is "interstitial".

[0]: http://gordonbrander.com/pattern/ma/

[1]: http://wiki.c2.com/?AlanKayOnMessaging


Every game with more than one player becomes a game about the interactions between those players.

Bill Kerr once said that point of view is worth 80 IQ points. I think the above statement is a good example.

It's simple enough to seem too trivial to write down. Yet years of work and failure could stand behind it. All too easy for beginners to ignore and trace the same frustrating path.


> Not Rocket Science Rule

I've been trying to introduce this into my current company for as long as I've been working there, with no success so far, for varying reasons:

- Unit tests for existing projects often fail
- Unit tests for existing projects take several minutes to run
- Our current build process cannot be automated

Quite frustrating, especially since I've seen a lot of cases where it could've saved us from near-disasters.

EDIT: I think on the subject of Emergence, the writer has missed one key point: emergence requires interplay between different levels of the system.


> Unit tests for existing projects often fail
> Unit tests for existing projects take several minutes to run
> Our current build process cannot be automated

Same here. Except we have almost no unit tests because:

1) Most of the devs don't know how to write unit tests
2) Most of the devs don't know how to write code that is testable, even if they could write the actual unit tests.

About a year or so ago we had a project that got a few unit tests added to it; within days they were broken and failing. The dev just added a [TestCategory("Blah")] and excluded them from being run...


> Except we have almost no unit tests because:

> 1) Most of the devs don't know how to write unit tests

> 2) Most of the devs don't know how to write code that is testable, even if they could write the actual unit tests.

Sounds really familiar... I'd like to add the following one:

> 3) Cannot make the distinction between code that needs unit tests and code that does not need unit tests


I'd like to add:

4) Don't know how to mock effectively

I've seen three flavours of this one:

- tests that take forever because they don't mock a slow external process that gets called 100s of times in the suite.

- tests that randomly break because someone left a file in /tmp/ or some other lack-of-isolation mystery.

- tests that run like shit off a shiny shovel, because everything is mocked, meaning that nothing is actually tested.


>tests that run like shit off a shiny shovel, because everything is mocked, meaning that nothing is actually tested.

Can you expand on this a bit?

I've always worked under the impression that tests should be focused on testing a narrow slice of code. So if my SUT has some dependencies, those will be mocked with the expected result from the dependency, and possibly even with the expected parameters in the arguments. The asserts will check the result from the SUT, but also that the expected method on the mock was called. This way, I'm just testing the code for the SUT, nothing more.


The problem is that bugs rarely occur in one unit. They occur in the interaction between multiple units. The smaller you choose to define a "unit" (everybody has their own definition of what a unit test is), the more true this rule becomes. The extreme case of a small unit would be testing every single line individually, which obviously nobody has the resources to do and which doesn't tell you anything about the complete system - yet it would still give you 100% code coverage!

Your "expected result from dependency" might be volatile or hard to mock due to bugs, state, timing, configuration, unclear documentation, version upgrades or other factors inside the dependency. So when the system breaks while all unit tests are still passing you get this blame-game where one team is accusing the dependency for not behaving as they expect, when the truth is that the interface was never stable in the first place or was never meant to be used that way.

What you have to do is to choose your ratio of system test vs unit test. The scenario GP describes is companies that spend 99% of their testing budget on unit test and 1% on system test, instead of a more healthy 40-60.


Thanks. That makes a lot of sense. So while testing a given class, it may have some dependencies, but those may be external resources (a DB, an API, etc), or internal ones. It sounds like the recommendation is only to mock where those external dependencies lie, and leave the internal dependencies. Eventually, as you go down the chain, those internal dependencies will get to external ones (which will likely still need some sort of mock/fake/stub), but you're allowing more of the logic and interaction of the system to be tested, rather than just the logic in the one class that's directly being tested.
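For instance, a minimal sketch of boundary-only mocking (Python here, since this thread spans several languages; all the names are made up):

  from unittest import TestCase, mock

  def price_in(currency, amount_usd, client):
      # Internal logic we actually want exercised for real.
      rate = client.get_rate(currency)  # external boundary: a network call
      return round(amount_usd * rate, 2)

  class PriceTest(TestCase):
      def test_price_conversion(self):
          fake_client = mock.Mock()
          fake_client.get_rate.return_value = 0.9
          # Only the network edge is faked; the arithmetic runs for real.
          self.assertEqual(price_in("EUR", 10.0, fake_client), 9.0)

The point being that only the slow/volatile edge gets a fake, while the logic between the edges is tested as it really runs.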


I'm not the GP in question, but I have worked on code that I think fits this phrasing.

In that project's case, 99% of tests were mocks where the only thing being tested was whether or not the mocked function got called the expected number of times or with expected arguments.

So the many thousands of tests ran very quickly, and over 90% of the code was covered by tests; however, nothing was actually being functionally tested in those cases.

In other words, the tests ran like shit off a shiny shovel.


Yes. This.

What then happens is that the mocked-out functions change what they return, or the order of what they do inside (e.g. so that a given set of inputs now throws a different exception). Someone forgets to update the mocks. All the tests continue to pass, even though none of the conditions are actually possible in the program.
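A tiny made-up example of that failure mode (sketched in Python; none of this is from a real codebase):

  from unittest import TestCase, mock

  def submit_order(repo, order):
      return repo.save(order)  # imagine the real save() changes its contract later

  class OrderTest(TestCase):
      def test_submit_calls_save(self):
          repo = mock.Mock()  # everything is mocked...
          submit_order(repo, {"id": 1})
          # ...so this only asserts the mock was called. If the real save()
          # starts raising or returning something else, this still passes.
          repo.save.assert_called_once_with({"id": 1})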


I'm new to development, would you recommend any resource out there that covers points 1) 2) and 3)? Thanks


I'm old to development, and would like to ask for the same thing. But something beyond "Pragmatic Unit Testing".

You see, I've read that. I've read one other book on unit tests too, been on a 2-day training in TDD, and spent many hours trying to write unit tests, and yet the skill still eludes me. It's like I have a blind spot there, because I can't for the life of me figure out how to test most of the code I write.

In the projects I'm working on, I find roughly 10% of the code to be unit-testable even in principle. That's the core logic, the tricky parts - like the clever pathfinding algorithm I wrote for routing arrows in diagrams, or the clever code that diffs configurations to output an executable changeset (add this, delete that, move this there...). This I usually write functional-style (regardless of language): I expect specific inputs and outputs, so I can test such code effectively. Beyond that, I can also test some trivial utilities (usually also written in functional style). But the remaining 80-90% of any program I work on turns out to be a combination of:

- code already tested by someone else - external dependencies

- code bureaucracy, which forms the vast majority of the program - that is, including/injecting/managing dependencies, moving data around, jumping through and around abstraction layers; this I believe is untestable in principle, unless I'm willing to test the code structure itself (technically doable for my Lisp projects...)

- the user interface, the other big part, which is also hilariously untestable, and rarely worth testing automatically, as any regression there will be immediately noticed and reported by real people

I'm having trouble even imagining how to unit-test the three things above, and it's not something covered in the unit testing books, tutorials or courses I've seen - they all focus on the basics, like assertions and red-green-refactor, which are the dumb part of writing tests. I'm looking for something on the difficult part - how to test the three categories of code I mentioned above.


> - code already tested by someone else - external dependencies

You test whether the integration works correctly. For example, if you use a library for computation, you test whether your function that uses the library produces the expected result.

> - code bureaucracy, which forms the vast majority of the program - that is, including/injecting/managing dependencies, moving data around, jumping through and around abstraction layers; this I believe is untestable in principle, unless I'm willing to test the code structure itself (technically doable for my Lisp projects...)

Here again you do some integration testing. For example, when handling database operations, you can set up a test to load fake data into the database, run operations on this fake data and compare the results, and then purge the fake data.
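A rough sketch of that load/run/purge pattern (Python with an in-memory SQLite database; purely illustrative):

  import sqlite3
  import unittest

  class AccountQueryTest(unittest.TestCase):
      def setUp(self):
          # Load fake data into a throwaway in-memory database.
          self.db = sqlite3.connect(":memory:")
          self.db.execute("CREATE TABLE accounts (name TEXT, balance REAL)")
          self.db.executemany("INSERT INTO accounts VALUES (?, ?)",
                              [("alice", 120.0), ("bob", -5.0)])

      def test_overdrawn_accounts(self):
          rows = self.db.execute(
              "SELECT name FROM accounts WHERE balance < 0").fetchall()
          self.assertEqual(rows, [("bob",)])

      def tearDown(self):
          self.db.close()  # the "purge" is trivial: the data dies with the connection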

> - the user interface, the other big part, which is also hilariously untestable, and rarely worth testing automatically, as any regression there will be immediately noticed and reported by real people

This sort of testing has become much more common for web apps and mobile apps. There are two things being tested. One is whether the interface loads correctly under different conditions (device types, screen resolutions, etc.); this is tested by comparing the image for the 'correct' configuration with the test results. The other is whether the interface behaves correctly for certain test interactions; this is tested by automating a set of interactions using an automation suite and then checking whether the interface displays/outputs the correct result.


This "Architecture: the lost years" keynote from uncle bob was helpful for me: https://www.youtube.com/watch?v=WpkDN78P884

I've also struggled with this notion of "untestable" code. Untestable code seems to usually be a big pile of function calls. Testable code seems to be little islands of functionality which are connected by interfaces where only data is exchanged.

Practically, this seems to be about shying away from interfaces that look like 'doThis()' and 'doThat()', and toward ones that look like 'apply(data)'. Less imperative micro-management and more functional-core/imperative-shell.

Edit: a network analogy might be: think about the differences between a remote procedure call API vs. simply exchanging declarative data (REST).
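A tiny sketch of the contrast (Python; all names are made up):

  # Imperative micro-management: can only be tested by mocking the collaborator.
  def do_this(printer):
      printer.warm_up()

  # Functional core: pure data in, data out - trivially testable.
  def apply_discount(order):
      total = sum(item["price"] for item in order["items"])
      discount = 0.1 * total if total > 100 else 0.0
      return {**order, "total": total - discount}

  # Imperative shell: the thin layer that touches the outside world.
  def handle_request(order, db):
      db.save(apply_discount(order))

The core can be tested with plain asserts on data; only the thin shell ever needs a fake.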


Have you heard of the ideas in https://www.destroyallsoftware.com/screencasts/catalog/funct...? Essentially, his position is that it's only worth unit-testing those tricky bits, and the rest is inherently not worth unit-testing, because you'll almost-certainly embed the same assumptions you're attempting to test into the tests themselves.

In other words, I agree with you, but would question why you're even _trying_ to unit-test those other areas that aren't conducive to unit testing.


Yes, I've heard of the idea of functional core, imperative shell. In fact, that's how I write most code these days. But I don't think I've watched this talk before, so it just went straight into my todo list.

> In other words, I agree with you, but would question why you're even _trying_ to unit-test those other areas that aren't conducive to unit testing.

Well, other people give me funny looks when they see how few tests I write...

No, but honestly, I see so much advocacy towards writing lots of tests, or even starting with tests, and so I'm trying (and failing) to see the merit of this approach in the stuff I work on.

> you'll almost-certainly embed the same assumptions you're attempting to test into the tests themselves.

That was my main objection when I was on a TDD course - I quickly noticed that my tests tend to structurally encode the very implementation I'm about to write.


The only thing that smelled like a path to enlightenment to me was Haskell's QuickCheck [0]. It actually goes out searching for bugs for me, rather than me having to pretend I can think of all the failure cases. I haven't used it in a project yet though, so I don't know how much easier it is than unit testing.

[0]: https://hackage.haskell.org/package/QuickCheck
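(If you're not in Haskell-land: the same QuickCheck-style property testing exists elsewhere, e.g. Python's hypothesis library. A minimal sketch of the idea:)

  from hypothesis import given, strategies as st

  @given(st.lists(st.integers()))
  def test_sort_is_idempotent(xs):
      # The framework generates many random lists and checks the property holds.
      assert sorted(sorted(xs)) == sorted(xs)

These tools also shrink any counterexample down to a minimal failing input, which is most of the magic.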


To be fair: much of it is practice and prior knowledge. I personally found the book Pragmatic Unit Testing helpful, as well as the Clean Code chapter on unit tests.

Others may be able to point you to more readily available sources.


Depending on the language:

Unit testing is easy for beginners. Unit testing done well is much less so. This results in many inexperienced people writing poor unit tests.

In my last job, I worked on a legacy C++ code base. 500K lines of thorough testing, but none was a unit test. It took 10 hours to run the full test suite. I set about the task of converting portions for unit testability.

I was surprised how much of a challenge it was (and I learned a lot). There were few resources on proper unit testing in C++. Mostly learned from Stack Overflow. Lessons learned:

1. If your code did not have unit tests, it will likely need a lot of rearchitecting just to enable a single unit test. The good news is that the rearchitecting made the code better.

As a corollary: You'll never know how much coupling there is in your code until you write unit tests for it. Our code looked fairly good, but when I wanted to test one part, I found too many dependencies I needed to bring in just to test it. In the end, to test one part, I was involving code from all over the code base. Not good. Most of my effort was writing interface classes to separate the parts so that I could unit test them.

2. For C++, this means your code will look very enterprisey. For once, this was a good thing.

3. Mocking is an art. There's no ideal rule/guideline for it. Overmock and you are just encoding bugs into your tests. Undermock and you are testing too many things at once.

4. For the love of God, don't do a 1:1 mapping between functions/methods and tests. It's OK if your test involves more than one method. Even Robert Martin (Uncle Bob) says so. I know I go against much dogma, but make your unit tests test features, not functions.

5. If your unit tests keep breaking due to trivial refactors, then architect your unit tests to be less sensitive to refactors.

6. For classes, don't test private methods directly. Test them through your public interface. If you cannot reach some code in a private method via the public interface, throw the code away! (There's a small sketch of this after the list.)

7. Perhaps the most important: Assume a hostile management (which is what I had). For every code change you make so that you can write a unit test, can you justify that code change assuming your project never will have unit tests? There are multiple ways to write unit tests - many of them are bad. This guideline will keep you from taking convenient shortcuts.

This advice is all about unit tests, and not TDD. With TDD, it is not hard to test yourself into a corner where you then throw everything away and restart. If you insist on TDD, then at least follow Uncle Bob's heuristic. For your function/feature, think of the most complicated result/boundary input, and make that your first unit test. This way you're less likely to develop the function into the wrong corner.
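To make point 6 concrete, a minimal sketch (Python here, since this thread spans several languages; the names are made up):

  class PriceCalculator:
      def quote(self, items):  # public interface: this is what gets tested
          return sum(self._line_total(item) for item in items)

      def _line_total(self, item):  # private helper, reached only via quote()
          return item["price"] * item["qty"]

  def test_quote_sums_line_totals():
      items = [{"price": 2.0, "qty": 3}, {"price": 1.5, "qty": 2}]
      # _line_total is exercised indirectly; nothing pokes at it by name.
      assert PriceCalculator().quote(items) == 9.0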

When I completed my proof of concept, the team rejected it. The feedback was that it required too much skill for some of the people on the team (40+ developers across 4 sites), and the likelihood that all of them would get it was minuscule. And too much of the code would need to change to add unit tests.

I later read this book:

https://www.amazon.com/Art-Unit-Testing-examples/dp/16172908...

And it contains quite a bit of what I learned from unit testing.


Well, that they fail and that they take minutes to run isn't in itself a bad thing; what's bad is that broken things end up on master.

Think of it this way: Any time the build is broken, everyone working on the project is interrupted. If you have 10 people working on the codebase and the build is broken for an hour, that's a whole workday and then some wasted.

You could start a grassroots movement - create a pre-push or pre-commit hook that runs the tests before anything ends up on the remote. Don't worry about the tests taking minutes to run; if you're waiting on that several times a day, you're probably publishing too many small changes over the course of a day.
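Git hooks are just executables, so the hook can be a few lines in any language. A minimal pre-push sketch (Python here; assumes a pytest suite - adjust to your test runner):

  #!/usr/bin/env python3
  # Save as .git/hooks/pre-push and mark it executable (chmod +x).
  import subprocess
  import sys

  result = subprocess.run([sys.executable, "-m", "pytest", "-q"])
  sys.exit(result.returncode)  # any non-zero exit aborts the push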


> Well, that they fail and that they take minutes to run isn't in itself a bad thing; what's bad is that broken things end up on master.

You don't understand; they are continually broken on master.

Furthermore, you assume that we only commit to master after reviews etc have been done. This isn't the case. Commits, even intermittent commits, are pushed to master, and reviewed from there.


> Commits, even intermittent commits, are pushed to master, and reviewed from there.

I think most people would agree that's an objectively wrong way to do it.


Agreed. I've been saying for a long time we should do it differently, but so far I've had no success.


You have to make the people love you. Then they will follow your good advice joyfully. In fact they'll follow your bad advice just as much, so be careful you don't get promoted to VP.


There's an important distinction between "our current build process cannot be automated" and "our current build process is needlessly complicated, and thus the ROI from automating it is questionable."

You can do a lot with bash scripts, PowerShell, etc. Even custom executables.


I'm a big believer in unit tests but, to be fair, this rule isn't quite accurate. Suppose I write a test as follows:

  Assert(rand() % 2 == 0);

When the test gets run the first time, it can pass and the code can be admitted, but when the next person goes to add code, it can fail. At this point, you've broken your defining principle: that all the code in the test base is correct. The answer of what to do here isn't clear, as it's a tradeoff between time spent fixing the tests (or potentially the test framework) and developing features. If you have deadlines, this becomes more tenuous. Then you figure out that the reason the test is flaky is that it isn't being run in a real environment... etc.

All I'm saying is that it isn't as simple as this "Rule" would have you believe.


What is the environment like? C? Assembly? Windows or Unix?


C#


Any single page version? As discussed here recently, "Skim reading is the new normal"[0], and this version with lots of tiny pages is impossible to skim.

[0] https://news.ycombinator.com/item?id=17841431


We usually moderate pages that are just lists of things, because they're rarely as interesting as the best single item on the list. But I made an exception because this one seemed unusual.


Good judgment. I'm really glad I read this today.



Lovely design. This is the first time in a very long time that I was clicking around just to see what would happen. It captures the playfulness of the old web, but is still modern enough. It works especially well on my rotated monitor, whereas the sidebar background becomes increasingly straining on wider displays.


Thanks :)

That’s the idea ... bring back the joy of blogging and in the meantime capture and grow knowledge.


>http://gordonbrander.com/pattern/start-with-a-toy/

This post reminds me of a PG essay I once read (too lazy to look it up). Did you use the PG essay as reference material? If so, it might be nice to add links to the sources that inspired the article.


I remember reading a book about game design (can't remember the title) that viewed game design through the concept of 'lenses'. One of the lenses was "first make a toy, then make a game around it". For example, a ball is a toy, and football, basketball and volleyball are games.


Yeah, the book is The Art of Game Design: A Book of Lenses by Jesse Schell. It seems likely this principle was inspired by the book; the author recommends it on a different page.

It’s a good book which would also be great as a set of prompt cards IDEO style.

Edit: On a whim I googled and it does exist as a card deck as well!


As long as we are here: that book by Jesse Schell is wonderful, I highly recommend it!


Possibly related:

Why Toys - https://blog.ycombinator.com/why-toys/

The Next Big Thing Will Start Out Looking Like a Toy - http://cdixon.org/2010/01/03/the-next-big-thing-will-start-o...


This is an awesome archive. Definitely bookmarking and sharing. Thank you.


...you missed "and never getting back to it again" :-p

I suspect the main value of this particular list was actually producing the list, and mainly for the author's sake.

The presentation is actually pretty, and I'm glad people share their favorite learning/knowledge resources. But I guess I'm coming mainly from my frustration at finding so many things interesting while it's so hard to choose what to focus my attention on.


Maybe, actually. But I’ll definitely share with others I’m mentoring or training while doing such. And I think I will also finally put the time in collating my version. Thanks (for indirectly inspiring me, heh).


And yes, my queue is long, and intentionally so. So I can relate to having “too much interesting”.


+1 for an awesome personal site. Thanks for sharing! Keep it up! :)


I'm reading through Douglas Engelbart's 'Augmenting Human Intellect' at the moment. He talks about getting his thoughts down to a similar level, so that each one is a small note or thought which can be linked together. This seems similar to what has been implemented on this site.

He talks a lot about 'trails' that people create, so I think the idea is that you'd be able to take this set of notes and links and integrate it with your own set.


Thank you for sharing these wonderful personal notes. I have been thinking of making one for my notes; now I have extra motivation.


I’ve spent the last year extracting all these kinds of learnings (with a UX bias) out of my brain for a book I’ve written. I wish I’d kept notes as I went along like this person did. Great stuff.


Thanks for this. I've thought of making my own blog (or at least part of it) in a similar style.

(I've got the blog and the domain set up, I just need to start adding content).


Just add one single item now.


Excellent general life advice.


Here's a general rule of thumb: a man whose backlog of books isn't at least 2x as long as the list of books he's read is either (logical OR here) a) consistently reading intellectually shallow or uninteresting things, or b) not really learning anything from what is being read.


This is very cool.




