Hacker News
Building a testable Go web app (sourcegraph.com)
93 points by rgarcia on Oct 5, 2014 | hide | past | favorite | 14 comments



I personally prefer your testv0 method. The way I understood it, it's full-stack testing that requires putting data into the DB and querying it back.

What's wrong with that? It might be a bit slower, but to me that kind of test is a lot more comprehensive. In the past I've had to slightly change the DB schema. Because full-stack test coverage was in place, the functional tests made all the bugs caused by the schema change visible immediately.

For me, when a schema change causes some other DB operation to fail, the issue is much more difficult to detect and fix if it isn't visible to the developer immediately.

It might be slower (~a few minutes for ~300 full functional test scripts run directly from the web-browser side, instead of seconds in your case), but I think the time is well spent for how quickly complex issues can be discovered and fixed.

BTW, I keep track of all DB operation (SQL query) times as part of the JSON response. If one takes > 1 millisecond (on an SSD), I flag it as a potential issue in the bug DB for future SQL-optimization investigation.

It should be easy to add the SQL query response times to a performance DB to track how they change over time/software versions.


Author of the post here. I totally agree that comprehensive tests are still necessary. The blog post/talk don't cover this, but we actually have both kinds of tests - unit tests to test components in isolation and end-to-end tests that hit functionality from the front-end all the way to the DB.

In our experience, unit tests are useful because they run quickly (so you can run them literally every time a file changes), and let you much more quickly figure out the root cause of a failure. To us, the ability to run tests in a few seconds vs over a minute is important, because we want the dev feedback loop to be as immediate as possible. Just as we would hate waiting on slow compile times, we also hate waiting on slow tests.

That having been said, we also run comprehensive end-to-end tests before pushing changes to the site to ensure everything works properly. We think both are important to ensuring site quality and keeping the dev cycle efficient.


I agree with you, and for me the classification "integration" vs "unit" doesn't add anything. I'd rather distinguish between slow and fast tests.

In my opinion TDD is top-down (instead of bottom-up): you have to start with those high-level tests because you can't be sure which abstractions you'll end up with. Your design and abstractions need to evolve from the tests.


In my experience TDD is orthogonal to project design strategy. Whether your testable goals serve a bottom-up progression or a top-down progression, TDD is still a useful mechanism for advancing the feature set.


You can cut and slice test automation however you want.

I usually have tons of unit-tests that rely on mocks, plus a lot of DAO/ORM integration-tests that require some sort of in-memory DB (I'm fortunate that DBs such as H2, HSQLDB, and Apache Derby exist in my world) to test queries (including DB migration testing).

The unit-tests and integration-tests cover a lot of code-branches (I use EclEmma/Cobertura to view coverage).

I still have full-stack functional tests but they don't have high coverage. They only ensure that the happy-path works.

I love my unit-tests because I can write them immediately without waiting for the DB/Storage to exist or to work.

I love my integration-tests because I don't have to wait until the front-end or some level of business logic is done (my DAO/Repository layer is kept minimal; most business logic lives in my Services layer, which is unit-testable).

If I see plenty of end-to-end bugs, I'll revisit my strategy.


The article isn't that clear, but mocking out the backend to test the frontend faster is a fairly common move. It lets you do more frontend tests in less time. Of course, you still have to do testing with a real database.


I didn't see any schema tests for the db. Testing the DAO is its own concern, and I can't consider that as unit test coverage for the schema portion of the project. I would normally be looking to something like PgTAP here.

(For the "oh really?" crowd, here's a brief rundown on what I often test: object permissions/owners, function correctness, non-trivial constraints, triggers (usually existence testing, sometimes functional verification), application-global data. And yes, it is great to have all that nailed down before diving into writing a DAO)


They switched from angular to essentially a static site for SEO reasons. I wonder if they're batch generating the static pages or making them dynamically on request.


We generate the pages dynamically. When code gets pushed to a repository indexed by Sourcegraph, we re-analyze it, which means that the location of definitions/references changes and the stats about who uses what change, too. This happens fairly often, so it's easier just to generate pages dynamically.

If you want to get an idea of our site architecture, you can check out https://sourcegraph.com/github.com/sourcegraph/thesrc. (It's a link aggregation site (thesrc.org) that pulls from HN, Reddit, and a few other places and only displays links with programming-related content.) We made it to share some patterns we found useful for building a web app in Go, and the interfaces and structure mirror how Sourcegraph.com is designed. There's also a talk here if you're interested: http://www.youtube.com/watch?v=7zYXhhrRn2E.


That hits close to home. I recently built a single page app with Backbone and SEO traffic is non-existent.

So now I'm dynamically generating pages on the backend, and serving them along with the JavaScript app so there's some indexable static content. There are a few ways to directly reuse your client-side code on the server, but they all seem pretty hacky and convoluted.

I bought into the notion that the backend should just be a client-agnostic API, but that's an extreme position that only makes sense when your site/app doesn't have any indexable content to begin with. If I could only go back in time...


ReactJS is awesome here. You write clientside code but you just run that clientside code on the server on request to prerender an initial DOM. And then your client hooks into that DOM. It has worked like a charm for me.


I tried an early version, and I was impressed, but my app relied on a CSS transition that was causing an issue with React. It looks like they've improved on that with ReactTransitionGroup, so I'll check it out again. Thanks!


> I bought into the notion that the backend should just be a client-agnostic API

There can be more than one backend -- your API is a microservice and you can have another backend that proxies this service and renders templates server-side. This also means you can define SEO-friendly URIs as opposed to your presumably RESTful API endpoints.


Yeah, this is the approach I've transitioned to. It's certainly better overall, but there's still an issue of maintaining some duplicated code on the server side and the client side.

There are some projects that specifically address this (Rendr, Lazo, Ezel), but I haven't made the switch to one of those yet.



