
I've heard lots of complaints about MySQL. Do you know if query deadlocks are more common with MySQL than with something like Postgres? I've been running into them a lot lately and I don't understand what the hell is going on. None of the queries I'm executing have particularly long run times (usually a few milliseconds, occasionally 100ms), but I still get random deadlock errors from time to time. I've never run into this on any other database platform (most of my experience is with Oracle and SQL Server).



We also have no idea what's going on with your system. I've run into this twice that I can recall. Once it was a pathological SELECT FOR UPDATE that was locking up way too much stuff, and the other time a boring-looking UPDATE was locking a whole bunch of seemingly unrelated indexes.

So for the two examples I ran into, all I can say is: make your transactions as small as possible, and take a close look at ALL the indexes, especially the ones you think are unrelated.

There's an option for logging the intimate details of deadlocks to the log file (or error file, or whatever it was). The main PITA is having to restart the server to enable it. On second thought, the real PITA is operational: in the "watched pot never boils" tradition, as soon as you enable it the intermittent problem seems to go away. Just dumb luck.
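
For reference, this is a sketch of how I'd turn it on today (assumes InnoDB on MySQL 5.6 or later, where the variable is dynamic, so you may even dodge the restart):

  -- write every detected deadlock to the error log
  SET GLOBAL innodb_print_all_deadlocks = ON;

  -- or just dump the most recent deadlock on demand
  SHOW ENGINE INNODB STATUS;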


The query in question is an UPDATE mytable SET flag=1 WHERE ID IN (...) statement. I'm assuming that another system was in the middle of performing a SELECT statement on the flag column.

In this particular case it's perfectly OK for the SELECT statement to get slightly stale data (e.g. returning a record with a flag that might have just been changed), so it would be nice if I could configure the table so that SELECT statements never lock. Is it possible to do something like that?


The first Stack Exchange hit when I googled this was pretty interesting. For completeness, if you use InnoDB as your engine,

SET GLOBAL TRANSACTION ISOLATION LEVEL READ UNCOMMITTED

Seems to be the line you are asking for. There's the usual way to make it a system default in the config file. As for the difference between what you are asking to do, and what you should do, well... be careful. Especially if you're feeding the dirty read back into something that could lead to an update. That could be icky.
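
If you only want it for one connection rather than the whole server, or want it as a startup default, something like this (a sketch; the exact config option spelling varies a bit between MySQL versions):

  -- only this session does dirty reads
  SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

  # my.cnf, in the [mysqld] section
  transaction-isolation = READ-UNCOMMITTED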

There's some way to do myisam stuff.

Generally I've found if you've got two things trying to lock one thing, overall processing goes a lot faster if one greedy dude grabs it for the whole thing. Cooperative multitasking doesn't work if it spends way more time lock switching than actually processing. Of course greedy processing is an excellent way to jam things up if you're not careful.

The funniest discovery I ever made about that was importing zillions of rows with autocommit off: it was dramatically faster, because each INSERT no longer demands its own flush to disk. This was the kind of data where, if someone yanked the power cord, it would just be auto-re-imported without any loss; if the only copy of the data is in RAM and the INSERT you just ran, then turning autocommit off might be unwise.
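
The pattern is just this (a sketch; table and columns are made up):

  SET autocommit = 0;
  INSERT INTO staging_rows (id, payload) VALUES (1, 'a');
  INSERT INTO staging_rows (id, payload) VALUES (2, 'b');
  -- ...zillions more rows...
  COMMIT;              -- one flush at the end instead of one per statement
  SET autocommit = 1;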

Your IN (...), if it's fed by a SELECT, might nonetheless be similar to my fun time a couple of years back with dueling indexes. This is the origin of the whole "never index more than you have to" meme: it doesn't just waste storage and write time. An "innocent"-looking pathological index intended to make some report run faster can kill read performance if it touches too many things other indexes touch while "separate" processing is happening. If you don't know that A leads to B, B leads to C, C leads to D, then why does messing with A kill D performance when D should be totally separate? Ah, forgot about that peculiar linkage...
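
A cheap way to see which index a statement actually uses (and therefore which index records it will lock) is EXPLAIN, which works on UPDATE as well as SELECT on MySQL 5.6+; a sketch against a made-up table:

  EXPLAIN UPDATE mytable SET flag = 1 WHERE ID IN (1, 2, 3);
  SHOW INDEX FROM mytable;   -- and here is every index that could get dragged in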

Note that none of this is a MySQL problem. You can shoot yourself in the foot with a common 9mm or a weirdo caliber. More people get hurt by the 9mm not because it inherently, magically hits feet more often, but because there are just more people using it. I'm quite sure you can screw up Oracle or DB2 the same way if enough people try hard enough. I guarantee that if you google, you're going to find more "mysql did something weird" stories than for any other DBMS, simply because more people use MySQL.


Thank you for helping me, this is great.


Deadlocks have little to do with queries and everything to do with transactions. They happen for a very good reason and never happen "randomly".

I assume you are using the innodb storage engine: http://dev.mysql.com/doc/refman/5.7/en/innodb-deadlocks.html
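
The classic minimal case is two transactions touching the same two rows in opposite order (a sketch with a made-up accounts table):

  -- session 1
  START TRANSACTION;
  UPDATE accounts SET balance = balance - 10 WHERE id = 1;

  -- session 2
  START TRANSACTION;
  UPDATE accounts SET balance = balance - 10 WHERE id = 2;

  -- session 1: blocks, waiting on the row session 2 holds
  UPDATE accounts SET balance = balance + 10 WHERE id = 2;

  -- session 2: would wait on session 1's row; InnoDB spots the cycle
  -- and kills one of the transactions with a deadlock error
  UPDATE accounts SET balance = balance + 10 WHERE id = 1;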


I have run into this from time to time: occasional deadlocks in MySQL. I think it was caused by occasional exceptions in the middle of writing a transaction, and then me leaving the connection open (via connection pooling) without doing either a commit or a rollback.

The obvious way to fix this is to funnel all database writes through a common set of code that always ends the transaction, with either a commit or a rollback, before the connection goes back to the pool.
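
At the SQL level the discipline is just this (a sketch; the error handling that decides between the two endings lives in whatever application code wraps it):

  START TRANSACTION;
  UPDATE mytable SET flag = 1 WHERE ID IN (1, 2, 3);
  COMMIT;     -- or ROLLBACK on any error, but never leave it dangling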

There's another way I've fixed this too: moving my highest-volume writes into a queue that is drained by a few small threads, partitioned so they never contend with each other. This architecture lets me smooth out load spikes and gives better performance anyway.


I think this would be a difficult question to answer without knowing the specific reason you are getting deadlocks.

What you want for best performance is optimistic concurrency control, which means you need to allow for "deadlocks happen" and retry when they do.
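
A common way to implement that at the SQL level is a version column (table and column names are made up; the retry loop lives in application code):

  -- read the row and remember its version
  SELECT flag, version FROM mytable WHERE ID = 42;

  -- write only if nobody else touched it in the meantime
  UPDATE mytable SET flag = 1, version = version + 1
   WHERE ID = 42 AND version = 7;
  -- zero rows affected means someone beat you to it: re-read and retry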



