Pools of ACID

by coatta 12/19/2010 1:28:00 PM

There's been a lot of interest recently in alternatives to the traditional DB-based application, things like NoSQL and BASE. A lot of the discussions that I've heard about these systems seem to imply that these approaches are drop-in replacements for the older technology, which is not true. Both these, and various other alternative approaches to working with persistent data, provide different semantics from a SQL DB coupled with transactions. In particular, transactions make it look like you're working with a single-threaded system. In the context of a specific transaction, data isn't going to change underneath you. Furthermore, you know that you won't be exposed to data in an inconsistent state, like a reference to a non-existent row in another table. These are pretty powerful guarantees. They make it relatively easy to reason about how your program is going to behave.
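Those two guarantees, atomicity and consistency, can be seen even in a tiny local database. Here's a minimal sketch using Python's built-in sqlite3 module (the account/ledger tables are just illustrative): a transaction touches two tables, the second statement violates referential integrity, and the first statement's change disappears along with it.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # enforce referential integrity
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("CREATE TABLE ledger (id INTEGER PRIMARY KEY, "
             "account_id INTEGER REFERENCES accounts(id), amount INTEGER)")
conn.execute("INSERT INTO accounts VALUES (1, 100)")
conn.commit()

try:
    with conn:  # one transaction: both statements commit, or neither does
        conn.execute("UPDATE accounts SET balance = balance - 50 WHERE id = 1")
        # References a non-existent account, so this statement fails...
        conn.execute("INSERT INTO ledger (account_id, amount) VALUES (99, 50)")
except sqlite3.IntegrityError:
    pass  # ...and the whole transaction is rolled back

# The balance update was undone along with the failed insert.
balance = conn.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()[0]
print(balance)  # 100
```

The code never has to reason about a state where the debit happened but the ledger entry didn't; that's exactly the "relatively easy to reason about" property the transaction buys you.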

The trouble is that this power comes at a cost. One of the most significant costs follows from the CAP theorem, which says that if you're going to have transactions that span DBs across multiple machines, your system can't keep working in the face of failures. Some people seem to have responded to this state of affairs as though the only solution were simply to get rid of transactions. This strikes me as a bit of a "baby/bathwater" scenario. Not using transactions at all makes programming much more complicated because your code needs to start dealing with more scenarios, such as data that is not in a completely consistent state.

I think a better approach is what I call pools of ACID. Using this approach, a system is decomposed into a set of cooperating processes. Within each process, transactions are used to provide a straightforward programming model. Processes interact with each other through a more loosely coupled request/response protocol, and transactions are not propagated from one process to another. Using this approach, you make conscious decisions about where transactions are most useful and where it is viable to have weaker guarantees.

We are currently using this approach where I work (Vitrium Systems). Each of our cooperating processes uses NHibernate / SQL Server to provide a transactional object model which is reasonably simple to work with. Processes are connected via NServiceBus. One of the nice things about this collection of tools is that the interaction with NServiceBus is transactional, even though transactions don't span processes. That is, the act of making a request through NServiceBus is part of the overall transaction associated with handling a request. If a failure occurs at any point in handling the request and the transaction is rolled back, then the NServiceBus request is rolled back as well.
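The property described above, where an outgoing request is enlisted in the local transaction even though the transaction never crosses the process boundary, is often implemented with what's sometimes called a transactional outbox. The sketch below shows the idea in miniature, assuming nothing about NServiceBus itself: the outgoing message is written to a local outbox table inside the same transaction as the state change, so a failure discards both. (The table names and `handle_publish` handler are hypothetical, just for illustration.)

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE documents (id INTEGER PRIMARY KEY, status TEXT)")
conn.execute("CREATE TABLE outbox (id INTEGER PRIMARY KEY, message TEXT)")
conn.execute("INSERT INTO documents VALUES (1, 'draft')")
conn.commit()

def handle_publish(conn, doc_id, fail=False):
    # The state change and the outgoing request share one local transaction.
    with conn:
        conn.execute("UPDATE documents SET status = 'published' WHERE id = ?",
                     (doc_id,))
        conn.execute("INSERT INTO outbox (message) VALUES (?)",
                     ("notify:%d" % doc_id,))
        if fail:
            raise RuntimeError("simulated failure while handling the request")

try:
    handle_publish(conn, 1, fail=True)
except RuntimeError:
    pass

# The failure rolled back both the status change and the queued message.
status = conn.execute("SELECT status FROM documents WHERE id = 1").fetchone()[0]
pending = conn.execute("SELECT COUNT(*) FROM outbox").fetchone()[0]
print(status, pending)  # draft 0
```

In a real deployment a separate dispatcher would drain the outbox and hand messages to the bus after commit; the point here is only that no request ever leaves the process for a transaction that didn't complete.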

Overall, this architecture is working well for us. It allows us to use the power of transactions where appropriate, and still achieve the degree of scalability and robustness that we need.

Microsoft Volta

by coatta 1/27/2008 9:21:00 PM

I read through a lot of Microsoft's material on Volta the other day. I sense the turning of the karmic wheel of software development. They note on their blog that Volta "enables developers to postpone architectural decisions about distribution." Well, having spent 3 or 4 years deeply involved with building infrastructures on top of CORBA, I can safely assert that this is exactly one of the goals that we had in building those infrastructures, and I think it is safe to say that it was one of the primary goals behind CORBA itself. In fact, the whole point of location transparency was just that: to completely isolate code from decisions about where objects actually reside. Oddly, the Volta folks acknowledge the failure of location transparency by referring to the fallacies of distributed computing. But location transparency is not really one of those fallacies; the fallacies are about more fundamental issues in distributed computing, such as the fact that latency is not zero and cannot be ignored.

One of the biggest problems with location transparency is that people misinterpreted it as meaning that location could be ignored. But anyone who was seriously involved in research in distributed computing -- including the folks who designed the CORBA specs -- knew that was not the case. Location transparency was about creating systems in which the syntax and semantics of invocation were the same regardless of where an object was located. And the primary reason why location transparency was a goal in CORBA was precisely because it allowed the physical mapping of objects to servers to be changed without having to change code. Sounds oddly like allowing "developers [to] architect their applications as a single-tier application, then make decisions about moving logic to other tiers late in the development process" -- which is straight from the Volta FAQ.

Part of the downfall of CORBA was that location transparency allowed people to build systems badly. In fact, I think it would be safe to say that unless you were thinking very carefully about patterns of object interaction, the physical locations of objects, the types of partial failures possible, etc., you were pretty much guaranteed failure. The Volta folks seem to be treading in similar territory. Unless they have mechanisms in place to prevent people from building poorly architected systems, then people will build poorly architected systems. And it will be only too easy to make the technology the scapegoat.

