Winning is the worst thing that can happen in Vegas (37signals.com)
74 points by mh_ on Jan 3, 2013 | hide | past | favorite | 58 comments



Where is this strong reaction to over-design coming from? Full disclosure: I earned most of my salary in 2012 doing consulting. Fortunately, most of it was from working on open source projects. But a component of it was going in and getting our hands dirty in clients' apps.

Not a single one suffered from over-architecture; indeed, the common thread to all of the problems I saw is that people would intentionally put off the "just in time" architecture improvements that DHH recommends in favor of just one more feature.

By the time we got there, the whole thing was about to collapse under its own weight.

And that's exactly my point: this seems to be a solution in search of a problem. Under-architecture is a much, much more widespread problem than over-architecture, based on my own anecdotal experience and the horror stories told by my friends over beers.

Software engineering is about striking a balance between choosing an architecture that can grow with your project while being simple enough for you to ship features today. The dogmatic approach from both sides in this ideological war is bizarre. Remember that prescriptivism against prescriptivism is still prescriptivism.


I've experienced the problem of too much flexibility when working on some enterprise software products at a former employer. We believed we were doing the right thing by keeping the analysis phase short and just building a generic capability for roughly what the customer needed. Boy did it cost us! Testing, support and future rounds of analysis all took much longer because we had to deal with all the flexibility that the system had. And the customers didn't appreciate it much either; they were only concerned with their limited set of use cases.


Maybe this is survivorship bias. The under-designed systems are the ones that make it to become mission-critical software; the over-designed ones never see the light of day.


If I had to guess, I'd say the people who read 37 signals' blog (and HN) are much more likely to over engineer than under engineer. Most of the client work (and I've done my fair share of consulting) I've seen suffers from exactly what you describe, but those aren't the people who are going to read 37s' blog.

However, I've also worked on the opposite end of the spectrum, with lots of turbo nerds who were big on planning for every eventuality (and who WERE the types to read 37s etc.), and that suffered heavily from exactly what he describes here.

Know your audience!


> Software engineering is about striking a balance between choosing an architecture that can grow with your project while being simple enough for you to ship features today. The dogmatic approach from both sides in this ideological war is bizarre.

Exactly, and striking that balance is not easy.

YAGNI and the KISS principle are often misunderstood and used as a bad excuse to ship crappy code, but used correctly these principles contribute to good architecture.


> Every little premature extraction or abstraction, every little structure meant for “when we have more of this kind”, every little setting and configuration point for “things we might want to tweak in the future”. It all adds up, and at a dangerous rate.

My experience is exactly the opposite. Every time someone's told me, "A user's account will only ever have one event associated with it", or "An order will only be paid for with one credit card" or "A credit card payment will only ever apply to one order", that someone is wrong, and it ends up costing us days to fix.


I came here to post something similar. YAGNI has burned me a lot more times than premature optimization. Perhaps it makes more sense to draw a distinction between designing for flexibility and what we might call naive flexibility, or optimizing for use cases that a more experienced developer would know are highly unlikely to be needed.


Of course, you don't (can't) see the times where YAGNI has helped you.


This thread is confusing to me so I'm probably misunderstanding a few things.

First, Ryan's post says he's saying something similar, except it appears to be the opposite of the post he's replying to.

Then you say you can't see the times when YAGNI helped you, but that's not true, either literally or in spirit. Literally, because it wouldn't be YAGNI if you did actually need it. But more importantly, I have often discovered some architectural choice or unused feature stubbed out years prior that saved me a ton of time in the here and now.


I was replying to pavel_lishin's parent comment, in which he noted that he has been burned many times by not optimizing early for flexibility:

http://news.ycombinator.com/item?id=5003341

My experience is similar to pavel_lishin's, and our experiences are contrary to the author of the article.

I hope that helps to clarify the first bit, anyway.


I'm certainly a "simplest thing that could possibly work" type of guy as described in David's post.

A somewhat unspoken but generally understood part of this philosophy is that while you don't go out of your way to make things super flexible, you also make sure you don't code yourself into a corner that will be hard to get out of should the requirements change.

Knowing how to balance this sort of thing well takes years and years of experience (IME) but it certainly isn't impossible.

As a side note, something taking "days to fix" is no big deal. A big part of the desire not to over-architect is to avoid situations where a fundamental and ultimately bad assumption takes months to fix, with the result being a mostly new and untested code base and all the pain that comes with that.


Half of this is understanding the probabilities of certain things happening. This involves estimating both known and unknown unknowns. The other half is knowing how to keep things flexible without having to write a lot of extra code. Where are the architectural choke points to apply the tourniquet to allow for later expansion?

Aside from being an expert in abstraction, domain knowledge is also a key component to make these decisions. You're right that it takes years to become competent at it, and even then there's a good amount of luck involved.


In my experience, this is true - but it's also extremely difficult to predict what spec changes the users will surprise you with. If you add unnecessary abstractions, generalisations and configuration, you still spend days fixing later if they weren't added in the right places.

> "An order will only be paid for with one credit card"

Perhaps there will be a request to allow payment in multiple parts for orders, so adding the generality from the start saves time. But perhaps instead the spec change is to allow payment of multiple orders at a time. Or partial payments of multiple orders. Or payment in foreign currencies and bitcoin. Or not paying orders at all, but marking them "completed" for accounting at the end of the year.

Altogether, it's not possible to correctly predict the future feature set of the project to decide on the correct places to add abstractions, unless you are really great at product design and can read the future minds of whoever is coming up with the requirements.


That's a different kind of failure, if you ask me. Those are uses that weren't foreseen or designed for, not code that wasn't used.


A lot like his post yesterday, this is really more content-lite self-branding from DHH. I get it--you ain't gonna need it. Yeah, yeah--you're opinionated. Cool.


I'm not one for the usual HN cynicism, but I have to agree. DHH is much smarter than these trite truisms. This would be an impressive article for an 18-year-old upstart, not someone who's ostensibly a thought leader in programming.


Say you have a simple application and the only possible use case is to increment an account by $100.

Should you write the function as

  account.credit_100_dollars()
  account.credit_dollars(100)
  account.process_transaction(new credit_transaction(100))
I have to say I prefer option 3 to option 1 even considering YAGNI. It's better OO, it's more testable, there's more potential for reuse in other parts of your application etc.

Interestingly, when I started as a developer I might have written something closer to the first option. After a few years I would have gone for something closer to the third option. Nowadays, I'd probably go for the second option as I've hopefully learnt to strike a balance in the abstractions I create.


None of those three examples expresses intent. Why are you crediting 100 dollars? Is it a refund, is it a loyalty reward, a signup bonus?

    account.award_loyalty_bonus()
    account.award_welcome_bonus()
This is closer to the first option, but will actually model the behaviour of your application using domain terms.


If I understand what benjamin is saying, these would be written as:

account.process_transaction(new loyalty_bonus(bonus_amount))

or possibly

account.process_transaction(new bonus(loyalty_bonus_amount))

depending on the business logic.


Not quite: the point is that the caller doesn't care how much the loyalty bonus is, only that the bonus it is awarding is for loyalty.

The value of the bonus then gets captured in the domain object. If there is enough related logic to justify having a loyalty bonus object, then you might have

    account.award(LoyaltyBonus.new)
Or, if we're going to start slicing things up like this, we might go for:

    LoyaltyBonus.award_to(account)


How is it more testable than the second option?

  previous = account.balance
  credit_amount = rand 1000
  account.credit_dollars credit_amount
  assert_equal previous + credit_amount, account.balance
And more importantly, how is option 3 a better idea for reuse?


With even a dash of business logic on the transaction object I think 3 is better code.

It's more testable because we can test the concept of a transaction rather than the net result of incrementing the account balance - e.g. test for a business rule to ensure that we can never have a transaction with a negative credit.

It promotes single responsibility because logic pertaining to the transaction can be pushed back onto the transaction object, where it is defined once and where it more naturally sits - e.g. validating the transaction or rendering itself for printing.

It promotes reuse because the transaction object could possibly be used in another context far removed from this particular scenario - e.g. any logic for validating or rendering the object would sit within the account object otherwise. It makes sense to bring it out where we could potentially reuse it later.

This is just object orientation 101, and it preaches the flexibility that the OP is against. Good OO code is code that is flexible and contains a certain degree of abstractions by design.
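To make the testability claim concrete, here's a minimal sketch of the option-3 style in Ruby. All names here (CreditTransaction, Account#process_transaction) are invented for illustration, not taken from any real codebase; the point is only that the business rule lives on the transaction and can be tested without an account.

```ruby
# Hypothetical sketch of option 3: the business rule (no negative
# credits) lives on the transaction object, not on the account.
class CreditTransaction
  attr_reader :amount

  def initialize(amount)
    # Business rule: a credit can never be zero or negative.
    raise ArgumentError, "credit must be positive" unless amount > 0
    @amount = amount
  end
end

class Account
  attr_reader :balance

  def initialize(balance = 0)
    @balance = balance
  end

  # The account only knows how to apply a transaction's amount.
  def process_transaction(transaction)
    @balance += transaction.amount
  end
end
```

The negative-credit rule can now be tested by constructing a CreditTransaction on its own, with no account involved, which is the single-responsibility point being made above.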


But you've lost the abstraction of crediting somebody! Now you have to know about transactions to give out that credit, and while it might be more flexible, you've lost your encapsulation and abstraction.

Option three would be suitable for the implementation of option two, I'll give you that, but if I'm designing an admin control panel I want to bind a control to credit an account, not to create a transaction to credit something and have an account process that transaction.


The point of using a standardized transaction object is that you can then track/manage all such transactions using roughly the same logic. Consider: crediting an account is merely a negative transaction, so there is no point in creating extra overhead or code to manage crediting transactions separately.

This would allow you, for example, to track and handle all transactions, including crediting transactions, using the same code base, with special transaction-type-specific issues dealt with by the transaction-type object.


Reminds me of an old joke: Programmers are good and pacifist people. A good programmer will never write a function like "bombHiroshima". Instead he will write a function "bomb" that takes Hiroshima as a parameter.


Loved this post, but "The casinos in Vegas are primed for this by making it relatively likely you’ll win something early on." is false.

Casinos in Vegas have no idea if you're deep into a gambling session or not. They take a small statistical edge on every transaction (or a rake).


Also, the Nevada Gaming Commission frowns upon changing the odds of a game while a gambler is playing (at least for slots -- I believe this is true for most other games also): http://www.lasvegassun.com/news/2008/feb/17/dont-worry-your-...


>> Casinos in vegas have no idea if you're deep into a gambling session or not.

Yes and no. The slot machine has no idea. But the casino certainly does. They do stuff like track you with membership cards, detect when you're frustrated and about to leave, and "helpfully" show up to offer you a free meal so you'll stay.

It's diabolical, really.

http://www.npr.org/blogs/money/2011/11/15/142366953/the-tues...


I think this statement is supposed to relate to the probability of winning something on any given bet, and not necessarily the expected value of the bet.

For example, your probability of winning when betting on black is higher than when betting on 12 (playing roulette). But your EV is still the same (or close).

I think roulette is an exception in that it lets you bet with fairly low odds of winning. Most other games (BJ, Slots, etc..) keep the odds of winning something closer to 50-50 on each transaction.
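A quick back-of-the-envelope check of the EV claim, assuming American roulette (38 pockets) with the standard payouts; this is just arithmetic, not anything from the article:

```ruby
# American roulette: 38 pockets (1-36 plus 0 and 00).
# Black has 18 winning pockets and pays even money (1:1);
# a straight-up bet on 12 has 1 winning pocket and pays 35:1.
def expected_value(win_pockets, payout)
  p_win = win_pockets / 38.0
  p_win * payout - (1 - p_win)  # return per dollar staked
end

ev_black    = expected_value(18, 1)   # -2/38, about -5.3%
ev_straight = expected_value(1, 35)   # -2/38, about -5.3%
```

Both bets lose the same 2/38 of the stake on average, but black pays out on roughly 47% of spins versus about 2.6% for the straight-up bet, which is exactly the probability-of-winning distinction being drawn here.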


It's true in the sense that the games are designed to pay out small wins fairly often, rather than big jackpots once in a great while.


Except for a few full-pay video poker machines, which actually give the player a small edge given perfect play.


Funny writing coming from the author of Ruby on Rails. "The ability to say No". So... once upon a time, there was a decision to create web apps. Rather than using the existing languages and tools, a whole new web framework was created from scratch. Let's talk now about abstraction and shit.

Note that I am by far not against the creation of libraries and new frameworks! It just so happens that this article is ironically funny in that respect.


Everyone loves architecture, just as long as they're the architect.


This is incorrectly titled - should be "Winning is the worst thing that can happen in Vegas".


dhh made a typo, and HN automatically picked up the old title. Maybe some mod around here can change it.


You really can't be a good developer on rules of thumb alone. The best code does just the minimum it needs to do (YAGNI) but still has the necessary abstractions for easy extensibility in the future. It's a fine line to walk.


Anybody get the sense that DHH's New Year's resolution was blogging more?


He has admitted that all these posts were triggered by the Ruby Rogues podcast's private mailing list, where a heated discussion is currently going on.


If only he would share some of that context in the actual posts. Reading them is like watching someone argue with another person you can't see or hear.


He really needs to do more self promotion.


"The casinos in Vegas are primed for this by making it relatively likely you’ll win something early on" seems like mushy thinking. Perhaps they make it relatively likely you win at any time, but there's no difference between "early on" and any other time.


If a game gives you a way to win back small amounts frequently but still lose over the long run, it can trigger this effect.

You're still likely to lose in the long run, but you get small wins frequently enough to keep you from quitting.


Any other time includes "early on". It just happens that this moment is of particular interest.


I think that there is a middle ground to be found in this aspect of development. On the one hand, you don't want to overengineer your code. On the other hand, requirements will change.

Based on this, I personally design object models to be easily extensible. The guidelines are simply best practices (separation of concerns, methods which only do one thing, minimising side effects as much as possible). Nonetheless, I agree that you should build the minimum, provided the things which should be abstracted have been abstracted.

For example, I was implementing reports recently. Overengineering is coming up with a complex reporting engine which managers can use to create anything they want. Underengineering is writing each report from scratch. The golden middle, in this case, was a lightweight framework where you extend a base class, specify the query and you're done. Everything else is automagically taken care of. So one need not worry about fetching the actual results, displaying them, sorting them, adding the report to the menu, etc.
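A minimal sketch of what such a base class might look like in Ruby. All names here (BaseReport, OverdueInvoicesReport, the datasource's `execute` interface) are hypothetical, since the original framework isn't shown:

```ruby
# Hypothetical "lightweight framework" base class: subclasses supply
# only a title and a query; the shared machinery handles the rest
# (fetching, sorting; rendering and menus would hang off this too).
class BaseReport
  def title
    raise NotImplementedError, "subclasses must define a title"
  end

  def query
    raise NotImplementedError, "subclasses must define a query"
  end

  # Run the query through any datasource responding to #execute,
  # then sort the rows for display.
  def run(datasource)
    rows = datasource.execute(query)
    rows.sort_by { |row| row[:name].to_s }
  end
end

class OverdueInvoicesReport < BaseReport
  def title
    "Overdue Invoices"
  end

  def query
    "SELECT * FROM invoices WHERE due_date < NOW()"
  end
end
```

Each new report is then a few lines: subclass, title, query. The shared fetch/sort/render machinery never gets rewritten.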


I generally agree with his overall YAGNI message. Architecture astronauts and smart complexifiers on projects that could be simple are a couple of my biggest development pet peeves.

I'm not sure I follow how that equates to technical debt though. If anything, unnecessary but well-implemented abstractions and over-architecture are the opposite of technical debt, aren't they?

In my experience, it's all too easy to find developers who will "say no" too eagerly, erring on the side of sloppy copy/paste duplication, inconsistent structure, and poor maintainability overall. Given the choice, I'd much rather walk into more projects with too much architecture than the one with little-to-none which are more common.


Just as many best-practices can be applied too early, they can also be applied too late. DHH's rant could just as easily be aimed at too-late application of such practices.

I've found that when I deliberate over these kinds of questions too much it's usually because my design is not quite right, and that when I eventually discover an elegant design the path to flexibility presents itself in an optional and unobtrusive way.

Many designs are possible but the right design is often the one that is easy to think about at various levels of complexity.


I agree with this ideology. However, what I think is missing from the post and the comments is that if you are going with an under-engineering design philosophy, you should be constantly refactoring the code.


Opinionated as usual, Mr. Heinemeier Hansson. But his way is not always the only (or best) way :-)

Still I respect his opinionatedness!


Poker players refer to the phenomenon as "winner's tilt". It will make you go broke if you're not disciplined.


Unfortunately, I think many people see that kind of "formal" architecting as software engineering.

related: http://zedshaw.com/essays/master_and_expert.html


I'm not sure I like the title. It seems to be clumsily conflating casino gambling with bad software practices. The former is not a good metaphor for the latter, especially since most of those premature abstractions come out of FUD and attempted risk aversion. (We won't be able to scale if we don't use design patterns!)

However, the insights into premature abstraction are good.

Here's a beautiful, and relevant, essay from Steve Yegge: http://steve-yegge.blogspot.com/2008/08/business-requirement...


I strongly, strongly disagree with the idea that you should only build products where you're the end user. There are many interesting problem domains where the end users aren't suited towards building the solution.

My company does Machine Learning for Sales. There aren't a huge number of people who can do both programming and statistics well. Add Sales to that skillset and there are, what, 5 people in the world who can do all 3? The odds of finding a sales person who wants a predictive sales tool and has the capability and interest to write it are vanishingly small.

For a consumer-oriented example, look at Coursera or other MOOCs. Andrew Ng is not a Coursera user (i.e. a student). He understands students, and that's enough. MOOCs provide enormous value because they solve an important problem. Students would have had a much harder time creating Coursera than Ng did.


> I strongly, strongly disagree with the idea that you should only build products where you're the end user.

I as well. Most software sucks because the people building it aren't motivated. Building for oneself is one source of motivation, but it's myopic to assume that it's the only source of motivation that works.


Is this a metaphor?

It seems to me it isn't saying that casinos are like software development, but pointing out human psychology, using the casino as an example.

The same reward system is there because the (human) psychology is the same.

TL;DR: If I compared the performance of the same car on a road and on a dirt track, I'd be making a comparison, but not a metaphorical one.


Fair. I think the analogy goes this far: in gambling, there's "beginner's luck", which is actually selection bias in favor of people who have early luck. (Beginners with shitty luck stop playing and therefore don't "begin".) In software, there's the tyranny of past success: the tendency to fight the last war.


DHH admitted on Twitter that it was an editing oversight; the title was supposed to be "Winning is the WORST thing that can happen in Vegas".


I was talking about the correct title.

He's comparing gambling (e.g. risk-seeking) with software development practices that are more often than not an incompetent expression of risk-aversion.


Also makes poor assumptions about behavioral science.



