Never delete code. This is why you have git or svn, or whatever your tool of choice is. Never, ever delete code. You may think it's dumb, you may think it's crap, or useless, or whatever, but in 2 years you'll think, "Damn, I remember doing this already, don't I have some code somewhere?" And you will.
You may look at it and rewrite huge chunks because you're a far better programmer now, but trust me, re-writing code is way easier than writing it from scratch.
Nah, SCM tools were not made to be your personal snippet collection; you're better off just getting a real snippet manager if you're the hoarder kind of programmer.
Sometimes you just need to burn the pictures of you and your ex and move on. Sure, in some sense you could "KEEP" everything; after all, digital space is cheap, right? But you end up keeping a lot of code around that you will never revisit.
> > but trust me, re-writing code is way easier than writing it from scratch
> Not always true, and not even often true.
In my experience, virtually always true. Just rereading the code you wrote before will bring back the understanding you had when you wrote it (unless you intentionally wrote obfuscated code, I suppose?), and it'll be immediately obvious to several-years-on you what the shortcomings of that idea were. If you have the time, a full rewrite almost always turns out to be better code than the old version, as long as you can hold off on trying new experiments in the process.
You can always take the experience with you, but often the old code lives in such a misguided architecture that it is better to scrap it than to unwind a multitude of bad, uninformed decisions (because you know better now!).
Re-writing code is actually almost always harder than writing it from scratch, but we do it for other benefits: interoperability with legacy components, the legacy of expected behaviour (warts and all), risk (the old code is debugged), and culture (programmers on the team know that code). But if you don't have those requirements, you will often come out behind rewriting the old code rather than going with a green field.
It also depends on whether the work one is doing is cutting edge (lots of experimentation and learning required) or basic dev work over relatively well known concepts.
> [...] sudo less /var/log/upstart/app.log, 99999... oh, this log ACTUALLY has 99999 lines. Waiting, Waiting (note to self: google the command to jump to the end, there must be one).
G (as in shift-g) jumps to the end of the file in less. Or use tail instead.
I've been bitten by similar complexities around indirectly managing the database connection pool in Go, too. There might be a little too much magic in the library (such as successfully iterating to the end of a result set implicitly closing the rows and releasing the connection).
That one I'm still not clear on. I THINK I should be closing that particular rows set already, the SQL is LIMIT 1 and I check if !rows.Next(){return ...}, yet... here we are :-)
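For what it's worth, here is a minimal sketch of how a LIMIT 1 query with a rows.Next() check can still hold on to a connection. The function, table, and column names are invented, not the post's code; only the standard database/sql behaviour is assumed:

func userName(db *sql.DB, id int) (string, error) {
    rows, err := db.Query(`SELECT name FROM users WHERE id = ? LIMIT 1`, id)
    if err != nil {
        return "", err
    }
    if !rows.Next() {
        // Next() returned false, so database/sql has already closed the Rows
        // and the connection is back in the pool: this path is safe.
        return "", sql.ErrNoRows
    }
    var name string
    if err := rows.Scan(&name); err != nil {
        rows.Close()
        return "", err
    }
    // Without this Close (or another Next() call that returns false), the
    // connection stays checked out even though LIMIT 1 means we are done.
    rows.Close()
    return name, nil
}

db.QueryRow sidesteps the question entirely, since its Scan closes the result for you.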
> In my haste, I'd not noticed, I'm queueing up all of my rows close statements for the function end, which happens after the for loop, which opens way more than the allowed connection limit (about 100 in this case).
Here I thought this was going to be about defer: how it is error-prone compared to RAII, how it is a modern-day alloca with the same kind of scope problems (like being unsafe to use in loops), and how it has a weird order of execution, with arguments evaluated immediately but the call itself run only later.
Instead it's just about having poor project management. A missed opportunity I guess.
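To spell out those defer gotchas, a small illustrative sketch (not code from the post; the function name and the queries slice are invented):

func deferGotchas(db *sql.DB, queries []string) error {
    for _, q := range queries {
        rows, err := db.Query(q)
        if err != nil {
            return err
        }
        // This runs at the end of the FUNCTION, not the end of the loop
        // iteration, so every pass keeps another connection checked out.
        defer rows.Close()
    }

    x := 1
    // The arguments are evaluated NOW (this will print 1), even though the
    // call itself only runs when the function returns.
    defer fmt.Println("x was", x)
    x = 2

    return nil
}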
1. Use a single *sql.DB handle for the whole app; it is a connection pool, not a single connection, and it manages pooling automatically.
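For example (driver name, DSN, and limit values are illustrative, and the matching driver import is assumed):

// One *sql.DB for the whole app: it is a connection pool, not one connection.
db, err := sql.Open("mysql", "user:password@/dbname")
if err != nil {
    log.Fatal(err)
}
db.SetMaxOpenConns(90) // stay safely under a server-side cap of ~100 connections
db.SetMaxIdleConns(10)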
2. Use this pattern for all single row queries:
err = db.QueryRow(`...`, ...).Scan(&...)
if err == sql.ErrNoRows {
    // Handle no rows
} else if err != nil {
    // Handle actual error
}
// All fine
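Filled in with an invented users table, that pattern looks like:

var name string
err = db.QueryRow(`SELECT name FROM users WHERE id = ?`, id).Scan(&name)
if err == sql.ErrNoRows {
    // Handle no rows
} else if err != nil {
    // Handle actual error
}
// All fine: name is populated, and QueryRow's Scan has already closed the
// underlying rows, so there is nothing to Close here.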
3. Use this pattern for all multi-row queries where you want to return a slice of structs containing the row values. Note that it is fine to call rows.Close() as soon as possible in addition to deferring it: the defer takes care of it whenever something goes wrong, and the explicit call returns the connection to the pool as soon as possible:
rows, err := db.Query(`...`, ...)
if err != nil {
    // Handle connection or statement error
}
defer rows.Close()

things := []rowStruct{}
for rows.Next() {
    thing := rowStruct{}
    err = rows.Scan(
        &thing.id,
        &thing.value,
    )
    if err != nil {
        // Handle row parsing error
    }
    things = append(things, thing)
}

err = rows.Err()
if err != nil {
    // Handle any errors within rows
}
rows.Close()
4. Treat transactions as serial things. If you need to call another query while you're still in a loop over rows (where you can't rows.Close() yet), read the rows into a slice first and then range over the slice. You must never have two queries running on the same transaction at once, so finish one thing before you start another, and be mindful of this if you are passing the transaction to other funcs.
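A sketch of what that looks like in practice (function, table, and column names invented):

func refreshStaleThings(tx *sql.Tx) error {
    // First query: read everything into a slice and close the rows, so
    // nothing else is still running on the transaction.
    rows, err := tx.Query(`SELECT id FROM things WHERE stale = 1`)
    if err != nil {
        return err
    }
    var ids []int
    for rows.Next() {
        var id int
        if err := rows.Scan(&id); err != nil {
            rows.Close()
            return err
        }
        ids = append(ids, id)
    }
    rows.Close()
    if err := rows.Err(); err != nil {
        return err
    }

    // Only now, with the first query fully finished, issue the next ones.
    for _, id := range ids {
        if _, err := tx.Exec(`UPDATE things SET stale = 0 WHERE id = ?`, id); err != nil {
            return err
        }
    }
    return nil
}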
An extra bit of info:
5. defer doesn't just have to be used to call rows.Close() directly; if you want to know when things happen, you can wrap the call in a deferred func and log:
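Something like this, instead of a bare defer rows.Close() (assuming a rows variable from one of the patterns above and the standard log package; the log wording is mine):

defer func() {
    // Time it, count it, or just note that it happened.
    log.Println("closing rows")
    if err := rows.Close(); err != nil {
        log.Println("rows.Close:", err)
    }
}()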
On which point, beware that there are some theoretically uncaught errors. For example, tx.Rollback() can return an error (http://golang.org/pkg/database/sql/#Tx.Rollback), but if you have called it as defer tx.Rollback() after creating a transaction, you'll never know. I hope the only reason it might error is that something has already ended the transaction, but there is definitely scope for deferred finalisation within a func to cause errors that you might miss, and it's worth considering the pattern above if you have any mysterious behaviour going on.
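One way to at least surface those, sketched (the function name is mine; sql.ErrTxDone is what you get from a transaction that has already been committed or rolled back):

func doWork(db *sql.DB) error {
    tx, err := db.Begin()
    if err != nil {
        return err
    }
    defer func() {
        // After a successful Commit this returns sql.ErrTxDone, which we can
        // ignore; anything else is an error a bare defer tx.Rollback() hides.
        if err := tx.Rollback(); err != nil && err != sql.ErrTxDone {
            log.Println("tx.Rollback:", err)
        }
    }()

    // ... queries on tx ...

    return tx.Commit()
}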
With defers and named return values, it is actually possible to alter the return value inside a defer. I had to do this recently to properly log an error (and abort in the calling code). Also to do with database operations, of course:
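Roughly like this (the names are mine, not the original code); because err is a named return value, the assignment inside the defer changes what the caller actually receives:

func countThings(db *sql.DB) (count int, err error) {
    rows, err := db.Query(`SELECT COUNT(*) FROM things`)
    if err != nil {
        return 0, err
    }
    defer func() {
        if cerr := rows.Close(); cerr != nil {
            log.Println("rows.Close:", cerr)
            if err == nil {
                // Overwrite the named return value so the caller can abort.
                err = cerr
            }
        }
    }()
    if rows.Next() {
        err = rows.Scan(&count)
    }
    return count, err
}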
If you know where the deadlocks are likely to occur, consider turning to a sync.Mutex and wrapping the statement in a lock. It will cause other goroutines to wait until the lock is free.
It all depends on where the deadlocks are, though; you can create them just as easily in the Go code as in the database queries.
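A sketch of the mutex approach (the struct, field, and query are invented):

type store struct {
    db *sql.DB
    mu sync.Mutex // serialises the statement that was deadlocking
}

func (s *store) claimJob(id int) error {
    s.mu.Lock()
    defer s.mu.Unlock()
    // Only one goroutine at a time reaches the database here, so the
    // contended rows are never locked by two of our own queries at once.
    _, err := s.db.Exec(`UPDATE jobs SET claimed = 1 WHERE id = ?`, id)
    return err
}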
I'm not around much today as I'm with a client this morning and lunch, but if you're stuck later I may well be in https://gophers.slack.com/ . Happy to help out if I can, as I'm sure most others will be.