
As soon as you let users edit data, you can't really benefit from serializable transactions.

Partly because you don't really want arbitrarily long transactions that stay open for however long the user wants to keep editing.

Partly because it's rather rude to roll back all the user's edits with a "deadlock detected, please reload the form and fill it out again".



I, uh, do want to point out that the alternative here is not "everything is OK". If you don't abort when, say, two users update the same row concurrently, then you might cause (e.g.) silent data loss for one of them. Or you might end up with a record in an illegal state--say, one with two different fields that should never be in their particular states together. You have to look at your transaction structure, intended application invariants, and measured frequency of concurrency to figure out if using a relaxed isolation level is actually safe or not.
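To make the silent-data-loss case concrete, here's a minimal sketch of the classic "lost update" anomaly under a read-modify-write pattern with no locking or versioning (the schema and values are made up for illustration):

```python
import sqlite3

# Hypothetical schema: one account row that two users edit concurrently.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES (1, 100)")
conn.commit()

def read_balance(conn):
    return conn.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()[0]

# Both users read the same starting balance...
seen_by_a = read_balance(conn)  # 100
seen_by_b = read_balance(conn)  # 100

# ...each applies their own change to the stale value and writes it back.
conn.execute("UPDATE accounts SET balance = ? WHERE id = 1", (seen_by_a + 10,))
conn.execute("UPDATE accounts SET balance = ? WHERE id = 1", (seen_by_b + 20,))
conn.commit()

print(read_balance(conn))  # 120, not 130: user A's +10 was silently lost
```

Neither write fails, so nothing tells you the first deposit vanished. Under serializable isolation (or any scheme that detects the conflict), one of the two writes would be rejected instead.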


IME in this model, the middleware-DB transactions were set to be serializable, but the web-user edits were done under an optimistic concurrency model, using versions or timestamps. You'd run into edit conflicts, which for many applications is a reasonable compromise.

The DB transactions would need to be kept open for user edits only if one were using a pessimistic model.
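The optimistic approach described above can be sketched roughly like this, using a version column whose expected value is checked in the UPDATE's WHERE clause (table and column names are hypothetical):

```python
import sqlite3

# Hypothetical document table with a version counter for optimistic locking.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, body TEXT, version INTEGER)")
conn.execute("INSERT INTO docs VALUES (1, 'draft', 1)")
conn.commit()

def save(conn, doc_id, new_body, expected_version):
    # The UPDATE only matches if nobody has bumped the version since we read it,
    # so the short DB transaction stays serializable-friendly while the user's
    # long "edit session" holds no locks at all.
    cur = conn.execute(
        "UPDATE docs SET body = ?, version = version + 1 "
        "WHERE id = ? AND version = ?",
        (new_body, doc_id, expected_version),
    )
    conn.commit()
    return cur.rowcount == 1  # False => edit conflict; ask the user to re-merge

# Two users both loaded version 1 of the form, then try to save.
ok_first = save(conn, 1, "first user's edit", 1)   # succeeds, version becomes 2
ok_second = save(conn, 1, "second user's edit", 1)  # stale version -> conflict
print(ok_first, ok_second)  # True False
```

The second save returns a conflict instead of silently overwriting, and the application can surface that however it likes (show a diff, force a reload, attempt a merge).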

Am I thinking about this correctly?


Most systems I've worked on would just let users completely overwrite each other, and would neither hold a transaction open nor use versioning. For those that didn't behave this way, I think versioning is the sanest option (as long as requirements permit it).



