Cook! Where's my dll?

by coatta 5/4/2011 8:58:00 AM

I've been working on a little sample project in DevStudio to check out NuGet. Writing the code, using NuGet, and getting things to build all went fine. But when I went to run the code, it failed. Because I had set the project up as a Windows service, debugging was not entirely straightforward -- all I had were some stack traces in the event log. The service had died because of an exception: Castle.MicroKernel.SubSystems.Conversion.ConverterException. The stack trace indicated that the exception was occurring during the process of getting logging initialized. After a bit more poking around I realized that there was a DLL missing from the bin folder of the project: Castle.Services.Logging.Log4netIntegration.dll.
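
For context, the logging setup in question looked something along these lines. This is a reconstruction rather than my actual code, and the exact fluent API varies a bit between Castle versions:

    // Reconstruction only -- not the original project's code.
    using Castle.Facilities.Logging;
    using Castle.Windsor;

    public class Bootstrapper
    {
        public static IWindsorContainer BuildContainer()
        {
            var container = new WindsorContainer();
            // Registering the log4net logger is what drags in
            // Castle.Services.Logging.Log4netIntegration.dll at runtime.
            // If that DLL is missing from bin, the facility fails while
            // resolving the log4net factory type, which is presumably
            // where the ConverterException came from.
            container.AddFacility<LoggingFacility>(
                f => f.LogUsing(LoggerImplementation.Log4net));
            return container;
        }
    }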

So, back to the project to make sure that I've got the reference added for that. Yup, there's the reference. OK, check the Copy Local setting. Yup, set to true. OK, check the universal mind (aka Google) to see what it recommends. Hmmm. MSDN docs say that if the assembly is in the GAC, then DevStudio won't copy the dll to the output directory. OK, check GAC. Nope, that assembly is not in the GAC. At this point I resort to one of my favourite tools: stumbling around blindly.

To make a long (and painful) story short, I eventually discovered that I had the Target Framework for the project set to .NET Framework 4 Client Profile. I'm not sure how I managed that, because it's a setting I would never normally use. However, here's what was happening as best I can tell: the log4net integration assembly has dependencies on assemblies that are part of the .NET 4 Framework, but are not in the Client Profile. While DevStudio was happy to let me add a reference to the integration assembly to the project, when it came time to do the build, it apparently regretted allowing me to make that mistake and decided that it should not copy the integration dll to the output directory. Because, of course, it wouldn't work since its dependencies would not be satisfied.
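
For anyone who wants to see where these settings actually live, here's roughly what the relevant bits of the .csproj look like (other elements omitted; the Reference entry is illustrative):

    <!-- Illustrative fragments of the .csproj -->
    <PropertyGroup>
      <TargetFrameworkVersion>v4.0</TargetFrameworkVersion>
      <!-- This was the culprit. Removing it (or switching the project back to
           the full ".NET Framework 4" target in the project properties) lets
           the log4net integration assembly's dependencies resolve, and the DLL
           gets copied to the output directory again. -->
      <TargetFrameworkProfile>Client</TargetFrameworkProfile>
    </PropertyGroup>
    <ItemGroup>
      <Reference Include="Castle.Services.Logging.Log4netIntegration">
        <!-- "Copy Local = true" in the Properties window shows up as Private here. -->
        <Private>True</Private>
      </Reference>
    </ItemGroup>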

Sadly, DevStudio failed to inform me of any of its internal considerations of the issue, resulting in several hours of suffering (for me, not DevStudio... how I wish I could inflict pain on DevStudio).

Lesson: If an assembly fails to copy to the output directory, check your target framework and make sure it's consistent with all your references.
 

Pools of ACID

by coatta 12/19/2010 1:28:00 PM

There's been a lot of interest recently in alternatives to the traditional DB-based application, things like NoSQL and BASE. A lot of the discussions that I've heard about these systems seem to imply that these approaches are drop-in replacements for the older technology, which is not true. Both these, and various other alternative approaches to working with persistent data, provide different semantics from a SQL DB coupled with transactions. In particular, transactions make it look like you're working with a single-threaded system. In the context of a specific transaction, data isn't going to change underneath you. Furthermore, you know that you won't be exposed to data which is in an inconsistent state, like a reference to a non-existent row in another table. These are pretty powerful guarantees. They make it relatively easy to reason about how your program is going to behave.
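
Just to make that concrete, here's a tiny example (made-up tables and connection string, plain System.Transactions, nothing fancy) of the guarantee I'm talking about:

    // Made-up tables and connection string, just to illustrate the guarantee.
    using System;
    using System.Data.SqlClient;
    using System.Transactions;

    public static class OrderWriter
    {
        public static void PlaceOrder(string connectionString, Guid orderId, int customerId)
        {
            using (var scope = new TransactionScope())
            using (var conn = new SqlConnection(connectionString))
            {
                conn.Open();  // enlists in the ambient transaction

                var insertOrder = new SqlCommand(
                    "INSERT INTO Orders (OrderId, CustomerId) VALUES (@id, @cust)", conn);
                insertOrder.Parameters.AddWithValue("@id", orderId);
                insertOrder.Parameters.AddWithValue("@cust", customerId);
                insertOrder.ExecuteNonQuery();

                var insertLine = new SqlCommand(
                    "INSERT INTO OrderLines (OrderId, Sku) VALUES (@id, @sku)", conn);
                insertLine.Parameters.AddWithValue("@id", orderId);
                insertLine.Parameters.AddWithValue("@sku", "ABC-123");
                insertLine.ExecuteNonQuery();

                // Anything that throws before this line rolls back both inserts,
                // so nobody ever observes an OrderLines row that refers to a
                // non-existent Orders row.
                scope.Complete();
            }
        }
    }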

The trouble is that this power comes at a cost. One of the most significant costs is captured by the CAP Theorem, which says that if you're going to have transactions that span DBs across multiple machines, your system can't keep working in the face of failures. Some people seem to have responded to this state of affairs as though the only solution was simply to get rid of transactions. This strikes me as a bit of a "baby/bathwater" scenario. Not using transactions at all makes programming much more complicated because your code needs to start dealing with more scenarios, such as data that is not in a completely consistent state.

I think a better approach is what I call pools of ACID. Using this approach, a system is decomposed into a set of cooperating processes. Within each process, transactions are used to provide a straightforward programming model. Processes interact with each other through a more loosely coupled request/response protocol, and transactions are not propagated from one process to another. This way, you make conscious decisions about where transactions are most useful and where it is viable to have weaker guarantees.

We are currently using this approach where I work (Vitrium Systems). Each of our cooperating processes uses NHibernate / SQL Server to provide a transactional object model which is reasonably simple to work with. Processes are connected via NServiceBus. One of the nice things about this collection of tools is that the interaction with NServiceBus is transactional, even though transactions don't span processes. That is, the act of sending a request through NServiceBus is part of the overall transaction associated with handling the current request. If a failure occurs at any point in handling the request and the transaction is rolled back, then the outgoing NServiceBus request is rolled back as well.
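
To give a flavour of what that looks like in code, here's a rough sketch of a handler in the style we use. The message and entity types are invented for the example, and the interfaces shown are the NServiceBus 2.x-era API:

    // Invented message/entity types; NServiceBus 2.x-era interfaces. The NHibernate
    // session is assumed to be enlisted in the same ambient transaction as the bus.
    using System;
    using NHibernate;
    using NServiceBus;

    public class ActivateAccount : IMessage { public Guid AccountId { get; set; } }
    public class SendActivationEmail : IMessage { public Guid AccountId { get; set; } }

    public class Account
    {
        public virtual Guid Id { get; set; }
        public virtual bool Active { get; set; }
        public virtual void Activate() { Active = true; }
    }

    public class ActivateAccountHandler : IHandleMessages<ActivateAccount>
    {
        public IBus Bus { get; set; }          // injected by NServiceBus
        public ISession Session { get; set; }  // injected; shares the ambient transaction

        public void Handle(ActivateAccount message)
        {
            var account = Session.Get<Account>(message.AccountId);
            account.Activate();
            Session.Update(account);

            // This send is part of the same transaction as the NHibernate work above.
            // If Handle() throws after this point, the DB update and the outgoing
            // request are rolled back together -- the message never leaves the machine.
            Bus.Send(new SendActivationEmail { AccountId = message.AccountId });
        }
    }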

Overall, this architecture is working well for us. It allows us to use the power of transactions where appropriate, and still achieve the degree of scalability and robustness that we need.

