
Iloggable

NoSQL is the new Arcadia

Recently there have been a lot of people lamenting the sheep-like mentality of picking RDBMS (and with it ORMs) as the way to model persistence, without first considering solutions that do not suffer from the object-relational impedance mismatch.

Many of the arguments for having to use an RDBMS are easily shot down, such as the relentless requirement for ad hoc reporting against production data (if your OLTP and OLAP are the same DB you are doing it wrong™). But just because the arguments for picking an RDBMS are often ill-considered, that doesn't mean the reasons for abandoning it are considered in any more depth.

Let me be clear that I do my best to stay away from RDBMS' whenever I can. I have plenty of scars from supporting large production DB environments over the years, and there are lots of pain points in writing web applications against RDBMS'. I, too, love schema-less, document and object databases. They make so much sense. I rabidly follow the MongoDB and Riak mailing lists and prototype projects with them and other NoSQL tech, such as Db4o. However, following those lists it is clear to me that a) they are still re-discovering lessons painfully learned by RDBMS folks and b) my knowledge of working with these systems when something goes wrong is woefully behind my knowledge of the same for RDBMS.

Pick the best tool

So yes, marvel at the simplicity of mapping your object model to a document model, or even serialize that object graph using an object or graph DB. But don't just concentrate on what they do better for development, ignoring the day-to-day production support issues. Take a minute and see if you can answer these questions for yourself:

Can you troubleshoot performance problems?

Every RDBMS has some kind of profiling tool and process list. And on the ORM side, Ayende's Uberprof is doing a fantastic job of bringing additional transparency to many ORMs. Do you have any similar tools for the alternative persistence layer? Do you know what's blocking your writes, your reads? What's slowing down your map/reduce? What indices, if applicable, are being hit? And if you're using a sharded setup, profiling just got an order of magnitude more complicated.

What about concurrency on non-key accesses?

Key/value stores are much faster than even primary key hits on an RDBMS. And document databases let you store the entire data hierarchy instead of normalizing it across foreign key tables, making graph retrieval cheap too.

But as NoSQL goes beyond simple key retrieval with query APIs and map/reduce, concurrency concerns sneak back in along with the query power. Many NoSQL stores are still using single-threaded concurrency per node, or at least per data silo (read: table locking). In RDBMS land, MySQL was the last one to solve that, and it did it 6-7 years ago.

What tools do you have to recover a corrupted data file?

Another set of tools you are guaranteed to find with any RDBMS is utilities for recovering corrupted data and index files, or at the very least utilities for extracting data from them in case of catastrophic failure.

With many NoSQL stores using memory-mapped files, corruption on power loss or DB crash is not uncommon. Does your persistence choice have ways to recover those files?

What's your backup strategy?

Most DBs have non-blocking DB dumps. Almost all have replication. Both are valid mechanisms.

Some NoSQL stores use replication to address the problem, others seem to punt on it by relying on redundant data duplication across nodes. But unless your redundant/replica nodes are geographically distributed, it's not the same as being able to go back to a backup after a catastrophic loss.

How do you know your replicas are working?

So you say you don't care if your data gets corrupted or that you can't do live backups, because it all gets replicated to a safe server. Well, much like going back to tape only to discover that your back-up process hasn't actually backed up anything, do you have the tools to ensure that your replicas are up to date and didn't get the corruption replicated into them?

Do your sysadmins share your comfort level?

A lot of these production-level and back-up related issues are not even something developers think about, because with the maturity of RDBMS' their maintenance and back-up are often tightly integrated into the sysadmin's processes. If you don't think you need to care about the above questions, chances are you have others doing it for you. And in that case, it's vital that your sysadmins are versed in the NoSQL tool you are choosing before you throw the operations requirements over the wall at them.

The tool you know

Maybe you have all those questions covered for your NoSQL tool of choice. Google, Facebook, LinkedIn do. But likely, you don't. Maybe you don't have them covered for any RDBMS that you know either. But here's the difference: These problems have been tackled in painstaking detail in thousands of RDBMS production environments. So, when you hit a wall with an RDBMS, chances are you can find an answer and get yourself out of that production mess.

The relative novelty and smaller deployment base of most NoSQL solutions mean you can't easily fall back on established production experience. Until you have that same certainty when, not if, you face problems in production, you can't really say that you objectively evaluated all choices and found NoSQL to be the superior solution to your problem.

You're an administrator, not THE Administrator

I set up a new dev machine last week and decided to give Win7 a try. My most recent dev setup was win2k8 server, and it's still my favorite dev environment. Fast, unobtrusive, things just worked.

Win7 appeared to be a different story, reminding me of the evil days of Vista. I had expected it to be more like win2k8 server, but it just wasn't. I was trying to be zen about the constant UAC nagging and just get used to the way it wanted me to work. But two days in, it all came to a head, and after wasting countless hours trying to work within the security circus it set up, I was ready to pave the machine.

Here are just a couple of things that were killing me:

Can't save into Program Files from the web

Had to save into My Documents and then move it there. Worse, it told me I had to talk to an administrator about that. I am an administrator!

Can't unzip into Program Files

Same story as above.

Have to whitelist (reserve) URIs for HttpListeners, and you can't wildcard ports.

This was the final straw, since my unit tests create listeners on random ports so that the shutdown failure of a previous test doesn't hose the registration of the next.
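
For context, the kind of test setup in question looks roughly like this; this is a simplified sketch, not my actual test code, and without a matching URL reservation the Start() call fails for a non-elevated process:

// Simplified sketch: each test spins up a listener on a random port.
var port = new Random().Next(20000, 60000);
var listener = new System.Net.HttpListener();
listener.Prefixes.Add(string.Format("http://localhost:{0}/", port));
// On Win7 this throws an access-denied HttpListenerException for a normal user
// unless the exact prefix has been reserved beforehand or the process runs elevated.
listener.Start();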

All these things need administrator privileges. But wait, I am an administrator, so what's going on? It appears that being an administrator is more like being in the sudoers file on unix: I have the right to invoke commands in the context of an administrator, but my normal actions aren't elevated. I tried to work around this with registry hacks, shortcuts set to run as administrator and so on, to get things to start up with administrator privileges by default, but Visual Studio 2k8 just refused to play along. You cannot set it up so that double-clicking a solution launches it as administrator in Win7. And even if you start VS as administrator, you cannot drag & drop files onto it, since it's now running in a different context than Explorer.

And if you ask MS Connect about this, you'll find that, like anything of value, the issue has been closed as "By Design". Ok, look buddy, just because you designed a horrible user experience doesn't mean the problem can just be dismissed.

But why was win2k8 so much better an experience, a nagging voice kept asking. Turns out that on win2k8, I just run as Administrator. Win7 never gave me that option (and you have to do some command-line foo to enable the account). Being a unix guy as well, running dev as what is essentially root just felt distasteful. But distaste or not, it's the key to actually being able to do productive development work in Windows. As soon as I became THE Administrator, instead of an administrator, all was smooth again.

Stupid lesson learned.

Duckpond: Lightweight duck-typing for C#

Edit: Changed As to AsImplementationOf since it's an extension method on object and too likely to collide.

A while back I was talking about Interface Segregation and proposed using either LinFu's DynamicObject or delegate injection. While I played a bit more with delegate injection, in practical use it turned out to be rather ugly and didn't really improve readability.

So, I've come back to wanting to cast an object to an interface regardless of what interfaces that object implements. I wanted this to be as simple and lightweight as possible, so rather than using a dynamic proxy framework, I simply rolled my own IL and wrote a pure proxy that does nothing but call the identical method on the class it wraps.

Introducing DuckPond

DuckPond is a very simple and focused library. It currently adds only a single extension method: object.AsImplementationOf<Interface>

Given a class Duck that implements a number of methods, including Quack:

public class Duck {
  public void Quack(double decibels) {
    // ... emit a quack at the given volume ...
  }

  //... various other methods ...
}

we can easily cast Duck to a more limited interface that the class doesn't implement, such as:

public interface IQuacker {
  void Quack(double decibels);
}

using the `object.AsImplementationOf<T>` extension method:

using Droog.DuckPond;

...

var quacker = new Duck().AsImplementationOf<IQuacker>();

That's all there is to it.

But is it fast?

Honestly, I don't know yet. I have not benchmarked the generated classes against virtual method dispatch or LinFu's and Castle's dynamic proxies. I assume it is, since unlike a dynamic proxy, DuckPond doesn't use an interceptor. Instead it emits Intermediate Language for each call in the interface, dispatching the call against the wrapped instance's counterpart.
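
To give a sense of what emitting such a pass-through proxy involves, here's a rough sketch of the general approach using Reflection.Emit. This is my own simplified illustration, not DuckPond's actual code; it ignores properties, events, ref/out parameters and generic methods:

using System;
using System.Reflection;
using System.Reflection.Emit;

public static class DuckCastSketch {

  // Build a proxy type that implements TInterface by forwarding each call to the wrapped instance.
  public static TInterface AsImplementationOf<TInterface>(this object instance) where TInterface : class {
    var interfaceType = typeof(TInterface);
    var wrappedType = instance.GetType();
    var assembly = AssemblyBuilder.DefineDynamicAssembly(new AssemblyName("DuckProxies"), AssemblyBuilderAccess.Run);
    var module = assembly.DefineDynamicModule("DuckProxies");
    var typeBuilder = module.DefineType(
      wrappedType.Name + "_As_" + interfaceType.Name,
      TypeAttributes.Public | TypeAttributes.Class,
      typeof(object),
      new[] { interfaceType });

    // field holding the wrapped instance
    var wrappedField = typeBuilder.DefineField("_wrapped", wrappedType, FieldAttributes.Private);

    // constructor that stores the wrapped instance
    var ctor = typeBuilder.DefineConstructor(MethodAttributes.Public, CallingConventions.Standard, new[] { wrappedType });
    var ctorIl = ctor.GetILGenerator();
    ctorIl.Emit(OpCodes.Ldarg_0);
    ctorIl.Emit(OpCodes.Call, typeof(object).GetConstructor(Type.EmptyTypes));
    ctorIl.Emit(OpCodes.Ldarg_0);
    ctorIl.Emit(OpCodes.Ldarg_1);
    ctorIl.Emit(OpCodes.Stfld, wrappedField);
    ctorIl.Emit(OpCodes.Ret);

    // one pass-through method per interface method, dispatching to the wrapped counterpart
    foreach(var interfaceMethod in interfaceType.GetMethods()) {
      var parameterTypes = Array.ConvertAll(interfaceMethod.GetParameters(), p => p.ParameterType);
      var targetMethod = wrappedType.GetMethod(interfaceMethod.Name, parameterTypes);
      var methodBuilder = typeBuilder.DefineMethod(
        interfaceMethod.Name,
        MethodAttributes.Public | MethodAttributes.Virtual,
        interfaceMethod.ReturnType,
        parameterTypes);
      var il = methodBuilder.GetILGenerator();
      il.Emit(OpCodes.Ldarg_0);
      il.Emit(OpCodes.Ldfld, wrappedField);
      for(var i = 0; i < parameterTypes.Length; i++) {
        il.Emit(OpCodes.Ldarg, (short)(i + 1));
      }
      il.Emit(OpCodes.Callvirt, targetMethod);
      il.Emit(OpCodes.Ret);
      typeBuilder.DefineMethodOverride(methodBuilder, interfaceMethod);
    }
    return (TInterface)Activator.CreateInstance(typeBuilder.CreateType(), instance);
  }
}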

Try it, fork it, let me know what you think

The code is available now at GitHub: http://github.com/sdether/duckpond

Linq2MongoDB: Building a Linq Provider for MongoDB

This weekend has been a hack-a-thon, trying to build a simple Linq provider for MongoDB. I'm using Sam Corder et al.'s excellent C# MongoDB Driver as the query pipeline, so my provider really is just a translator from Linq syntax to Mongo document query syntax. I call it a hack-a-thon because it's my first Linq provider attempt and, boy, is that query translator state machine ugly already. However, I am covering every bit of syntax with tests, so that once I understand it all better, I can rewrite the translator in a cleaner fashion.

My goal for this provider is to replace a document storage layer I've built for a new notify.me project using NHibernate against mysql. This is in no way a judgment against NHibernate. It just happens that for this project, my schema is really a set of heavily denormalized JSON documents. While Fluent NHibernate made it a breeze to map them into mysql, it's really an abuse of an RDBMS. It was a case of prototyping with what you know, but now it's time to evaluate whether a document database is the way to go.

Replacing existing NHibernate code does mean that, eventually, I want the provider to work with POCO entities and use a fully strongly-typed query syntax. But that layer will be built on top of the string-key based version I'm building right now. The string-key based version will be the primary layer, so that you never lose any of the schema-less flexibility of MongoDB, unless you choose to.

Basic MongoDB queries

So, lacking an entity with named properties to map against, what does the syntax look like right now? The first thing we need is an IQueryable<Document>, which is created like this:

var mongo = new Mongo();
var queryable = mongo["db"]["collection"].AsQueryable();

Given the queryable, the queries can be built using the Document indexer like this:

var q = from d in queryable where (string)d["foo"] == "bar" select d;

The Document indexer returns an object, which means a cast is unfortunately required on one side of the conditional. Alternatively, Equals, either the static or instance version, also works, alleviating the need for a cast:

var q = from d in queryable where Equals(d["foo"], "bar") select d;
// OR
var q = from d in queryable where d["foo"].Equals("bar") select d;

Better, but it's not as nice as operator syntax would be, if only we could get rid of the casts.

As it turns out, there are a number of query operators in MongoDB that don't have an equivalent syntax in Linq, so a helper class for generating query expressions was already needed. The helper is instantiated via the Document extension method .Key(key), giving us the opportunity to overload operators for the various types recognized by MongoDB's BSON. This allows for the following conditional syntax:

var q = from d in queryable
        where d.Key("type") == "customer" &&
              d.Key("created") >= DateTime.Parse("2009/09/27") &&
              d.Key("status") != "inactive"
        select d;

IN and NOT IN

In addition to normal conditional operators, the query expression helper class also defines IN and NOT IN syntax:

var inQuery = from d in queryable where d.Key("foo").In("bar", "baz") select d;

var notInQuery = from d in queryable where d.Key("foo").NotIn("bar", "baz") select d;

The helper will be the point of extension to support more of MongoDB's syntax, so most query definitions will use the d.Key(key) syntax.

findOne, limit and skip

Linq has matching counterparts to MongoDB's findOne(), limit() and skip() in First or FirstOrDefault, Take and Skip respectively, and the current version of the Linq provider already supports them.
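
For illustration, hypothetical usage against the queryable from above might look like this (assuming the provider translates these particular calls):

// findOne()
var first = queryable.First(d => Equals(d["foo"], "bar"));

// skip(20) and limit(10)
var page = (from d in queryable where d.Key("type") == "customer" select d)
    .Skip(20)
    .Take(10);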

What's missing?

There is a lot in Linq that will likely never be supported, since MongoDB is not a relational DB. That means joins, sub-queries, etc. will not be covered by the provider. Anything that does map to MongoDB's capabilities, though, will be added over time. The low-hanging fruit are Count() and order by, with group by following thereafter.

Surprisingly, || (or conditionals) won't happen as quickly, since aside from or-type queries using the .In syntax, it is not directly supported by MongoDB. In order to perform || queries, the query has to be written as a javascript function, which would basically mean that as soon as a single || shows up in the where clause, the query translator would have to rewrite all the other conditions in javascript as well. So, that's more on the nice-to-have end of the spectrum of priorities.

Ready to go!

I will most likely concentrate on the low hanging fruit and then work on the POCO query layer next, since my goal is to be able to try out MongoDB as an alternative to my NHibernate code.

All that said, the code described above works now and is ready for some test driving. It's currently only in my branch on github, but I hope it will make it into the master soon.

About Concurrent Podcast #3: Coroutines

Posted a new episode of the Concurrent Podcast over on the MindTouch developer blog. This time Steve and I delve into coroutines, a programming pattern we use extensively in MindTouch 2009 and one that I'm also trying out as an alternative to my actor-based XMPP code in Notify.me.

Since there isn't a native coroutine framework in C#, we're using the one provided by MindTouch Dream. It's built on top of the .NET iterator pattern (i.e. IEnumerable and yield) and makes the assumption that all coroutines are asynchronous methods using Dream's Result<T> object for coordinating the producer and consumer of a return value. Steve's previously blogged about Result. Since those posts there have also been a lot of performance and capability improvements to Result committed to trunk, primarily providing robust cancellation with resource cleanup callbacks. For background on coroutines, you can also check out previous posts I've written.

The cool thing about asynchronous coroutines compared to an actor model is that call/response based actions can be written as a single linear block of code, rather than as separate message handlers whose contiguous flow can only be determined by examining the message dispatcher. With a message dispatcher that can correlate message responses with suspended coroutines, sending and waiting for a message in a coroutine can be made to look like a method call without blocking the thread, which, especially with message-passing concurrency, is vital, since a response isn't in any way guaranteed to happen.
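
To make the idea concrete, here is a toy sketch of the iterator-based coroutine pattern. This is purely illustrative and not Dream's actual API: the coroutine yields pending operations, and a driver resumes the iterator when each one completes, so the call/response flow reads as one linear block without blocking a thread.

using System;
using System.Collections.Generic;

// Toy stand-in for a yielded asynchronous operation (illustrative only).
public class Pending {
    public object Value;              // filled in by the operation before it calls back
    public Action<Action> Start;      // starts the work and invokes the callback when done
}

public static class CoroutineDriver {
    public static void Run(IEnumerator<Pending> coroutine) {
        if(!coroutine.MoveNext()) {
            return;                   // coroutine has finished
        }
        // resume the coroutine when the yielded operation completes
        coroutine.Current.Start(() => Run(coroutine));
    }
}

public class XmppExample {
    // call/response as one linear block: send, suspend, then handle the reply
    public static IEnumerator<Pending> SendAndAwaitReply() {
        var request = new Pending {
            // stand-in for sending a message; done() would be called when the reply arrives
            Start = done => done()
        };
        yield return request;         // suspended here, no thread blocked
        Console.WriteLine("reply: " + request.Value);
    }
}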

I'm due to write another post on how to use Dream's coroutine framework, but in the meantime I highly recommend checking out Dream from MindTouch's svn. Lots of cool concurrency stuff in there. trunk is under heavy development as we work towards Dream profile 2.0, but 1.7.0 is stable and production proven.

Composing remote and local linq queries

One of the cool things about Linq is that queries are composable, i.e. you can add further query constraints by selecting from an existing query. Nothing is executed until you try to read from the query. This allows IQueryable to compose all the added constraints and transform them into the underlying query structure, most commonly SQL.

However, it does come with the pitfall that there are a lot of things legal in Linq expressions that will die at run time. This happens because an expression may not have an equivalent syntax in the transformed language, like calling a method as part of a where clause.

This does not mean that you can't use Linq for the portion of the query that is not executable by the provider. As long as you know which expression is affected, you can use query composition to build a query that executes some parts remotely and some parts against the object model in memory.

Let's suppose we wish to execute the following query:

var q = from e in session.Queryable<Entry>()
    where e.Created > DateTime.Parse("2009/9/1")
        && e.Created < DateTime.Now
        && e.Tags.Contains("foo")
    select e;

But our query provider doesn't understand the extension method that allows us to check the list of Tags. In order for this query to work, that portion must be executed against the result set of the date range query. We could coerce the first portion to a list or array and then query that portion, but that would just force the date query to be materialized before we could prune the set. Instead we want to feed the stream of matching entries into a second query, composing a new query that contains both portions as a single query and won't access the database until we iterate over it.

To accomplish this I created an extension method that coerces the query into a sequence that yields each item as it is returned by the database query:

public static class LinqAdapter {
    public static IEnumerable<T> AsSequence<T>(this IEnumerable<T> enumerable) {
        foreach(var item in enumerable) {
            yield return item;
        }
    }
}

UPDATE: As Scott points out in the comments, my AsSequence just re-implements what is already available in Linq as AsEnumerable. So the above really just serves to explain how AsEnumerable defers execution to enumeration rather than query definition.

Anyway, AsSequence or AsEnumerable allows me to compose the query from server and local expressions like this:

var q = from e in session.Queryable<Entry>()
    where e.Created > DateTime.Parse("2009/9/1")
        && e.Created < DateTime.Now
    select e;
q = from e in q.AsSequence() where e.Tags.Contains("foo") select e;

When q is enumerated, the first expression is converted to SQL and executes against the database. Each item returned from the database is then fed into the second query, which checks its expression and yields the item to the caller, should the expression match. Since q.AsSequence() is used as part of query composition, it does not force the first expression to execute at the time of query definition as q.ToList() would. The additional benefit is that even when q.AsSequence() is executed, it never builds the entire result set in memory as a list to iterate over, but rather just streams each database query result item through its own expression evaluation.

Of course, this still has the performance implications of sending data across the wire and filtering it locally. However, this is not an uncommon problem when SQL alone cannot provide all the filtering. The benefit of this approach is reduced memory pressure on execution, better control over when execution occurs and the ability to use Linq syntax to do the secondary filtering.

Concurrent Podcast and Producer/Consumer approaches

As usual, I've been blogging over on the MindTouch Developer blog, and since the topics I post about over there have a pretty strong overlap with what I'd post here, I figured I might as well start cross-posting about it here.

Aside from various technical posts, Steve Bjork and I have started recording a Podcast about concurrent programming. It's currently 2 episodes strong, with a third one coming soon. Information on past and future posts can always be found here.

Today's post on the MindTouch dev blog is about the producer/consumer pattern and how I moved from using dedicated workers with a blocking queue to using Dream's new ElasticThreadPool to dispatch work.
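
For reference, the dedicated-worker-with-blocking-queue shape I'm moving away from looks roughly like this; a generic sketch using BlockingCollection, not the actual notify.me or Dream code:

using System;
using System.Collections.Concurrent;
using System.Threading;

// Generic sketch of the dedicated-worker/blocking-queue producer/consumer pattern.
public class WorkQueue<T> {
    private readonly BlockingCollection<T> _queue = new BlockingCollection<T>();

    public WorkQueue(Action<T> handler, int workers) {
        for(var i = 0; i < workers; i++) {
            var worker = new Thread(() => {
                // each dedicated worker blocks until an item is available
                foreach(var item in _queue.GetConsumingEnumerable()) {
                    handler(item);
                }
            }) { IsBackground = true };
            worker.Start();
        }
    }

    // producers call this from any thread
    public void Enqueue(T item) {
        _queue.Add(item);
    }
}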

Writing a Tag Algebra compiler with Coco/R

This week, I was digging back into the Coco/R-implemented parser of DekiScript, tracking down a bug, which turned out not to be in the parsing bits at all. It did, however, get me familiarized with Coco/R again. So I thought I'd give myself an hour to implement the parser for my tag-related boolean algebra with Coco/R. If I could pull it off, forget about the regex/state-machine approach I was considering.

Took me about 15 minutes to set up the classes to represent the intermediate AST and another 30 minutes for the grammar in Coco/R ATG format. After that I wrote a couple of unit tests to check that the parsing was right, only to realize that while AND and OR are left-to-right associative, NOT is right-to-left associative. Figuring out how to adjust the grammar for that actually took me another 10-15 minutes. But overall, I hit the one-hour goal.

The Syntax Tree

Before tackling the grammar to parse, I needed to define data structures to represent the parsed syntax tree, which I'd later convert to executable code. The syntax is fairly simple:

(foo+bar)|(^foo+baz)

This can be represented by just 4 tree node types (with common parent TagExpression):

  • AndExpression
  • OrExpression
  • NotExpression
  • Tag

The parentheses are just tree structure artifacts, so are not represented in the AST.
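
For reference, a minimal sketch of what those node types might look like (my own guess at the shape, not the actual implementation); the ToString overrides illustrate how the tree can be rendered back into the expression syntax:

public abstract class TagExpression {
    public static readonly TagExpression Empty = new EmptyExpression();

    private class EmptyExpression : TagExpression {
        public override string ToString() { return ""; }
    }
}

public class Tag : TagExpression {
    public readonly string Name;
    public Tag(string name) { Name = name; }
    public override string ToString() { return Name; }
}

public class NotExpression : TagExpression {
    public readonly TagExpression Expression;
    public NotExpression(TagExpression expression) { Expression = expression; }
    public override string ToString() { return "^" + Expression; }
}

public class AndExpression : TagExpression {
    public readonly TagExpression Left;
    public readonly TagExpression Right;
    public AndExpression(TagExpression left, TagExpression right) { Left = left; Right = right; }
    public override string ToString() { return "(" + Left + "+" + Right + ")"; }
}

public class OrExpression : TagExpression {
    public readonly TagExpression Left;
    public readonly TagExpression Right;
    public OrExpression(TagExpression left, TagExpression right) { Left = left; Right = right; }
    public override string ToString() { return "(" + Left + "|" + Right + ")"; }
}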

The Coco/R grammar

I've broken the grammar up to discuss the various parts, but the below sections represent a single ATG file, the Coco/R grammar format.

COMPILER TagAlgebra

  public TagExpression Result = TagExpression.Empty;

The COMPILER directive defines the entry point PRODUCTION for the parser. The following lines, until the next grammar definition, are inserted into the generated Parser and can be used to inject class fields, extra methods, etc. into the Parser source. The only thing I inserted was a field to hold the root of the AST, initialized to empty.

IGNORECASE

This tells Coco/R that our grammar is case insensitive.

CHARACTERS
  tab = '\t'.
  eol = '\n'.
  cr = '\r'.
  nbsp = '\u00a0'. // 0xA0 = 'unbreakable space'
  shy = '\u00ad'.  // 0xAD = 'soft-hyphen'
  letter = 'a'..'z'.
  digit = '0'..'9'.

CHARACTERS defines characters that the parser should recognize for matches.

TOKENS
  tag =
    letter { letter | digit | "_" | ":" }
    .

The only token in the grammar is tag, which is composed from the characters defined above and extra quoted characters.

IGNORE eol + cr + tab + nbsp + shy

IGNORE tells Coco/R what characters have no meaning in parsing the input.

Next come the PRODUCTIONS, i.e. the meat of the grammar. These are the rules for matching input and converting it into code. Coco/R is an LL(1) parser generator, i.e. the grammar must be parsable from left to right with left-canonical derivations and one look-ahead symbol. We also cannot have a loop in our grammar, i.e. all possible branches have to lead to a terminal via a unique set of production matches.

PRODUCTIONS

  TagAlgebra                      (. Result = TagExpression.Empty; .)
    =
    [ BinaryExpr<out Result> ]
    .

The first production is the entry point, which, again, sets the result to an empty AST, since the same instance of the parser can parse multiple expressions. It then specifies 0 or 1 BinaryExpr productions.

  BinaryExpr<out TagExpression expr>  (.  expr = null; TagExpression right = null; .)
    =
    NotExpr<out expr> {               (. bool and = false; .)
      (
        "|"
        | "+"                         (. and = true; .)
      )
      NotExpr<out right>              (. expr = and ? (TagExpression)new AndExpression(expr,right) : new OrExpression(expr,right); .)
    }
    .

BinaryExpr is used for both AND and OR expressions, since both take two sub-expressions and combine them with a single operator. The production specifies a left side of a NotExpr followed by an optional operator and another NotExpr. I.e. should our first match happen not to be a BinaryExpr, the parser can fall through to NotExpr and return its result instead, without matching the optional production body.

  NotExpr<out TagExpression expr> (. expr = null; int notCount = 0; .)
    =
    {
      "^"                         (. notCount++; .)
    }
    (
      "(" BinaryExpr<out expr> ")"
      | tag                       (. expr = new Tag(t.val); .)
    )                             (. for(var i=0;i<notCount;i++) expr = new NotExpression(expr); .)
    .

END TagAlgebra.

NotExpr, just like BinaryExpr, optionally matches on its operator, requiring only that at the end of the operation it matches either a BinaryExpr enclosed by parentheses (i.e. not a circular match back into BinaryExpr, since it requires additional surrounding matches), or a tag, the ultimate terminal in the grammar.

There is one tricky bit in this production: the NOT operator can match multiple times, which means we need to accumulate the number of operator matches and build the chain of NOT expressions wrapping the current expression once we know how many, if any, matched.

What to do with the AST

The nice thing about Coco/R is that it adds no runtime dependency at all, building a fully self-contained Scanner and Parser. With these built, it is now possible to take Tag Algebra expressions and turn them into an executable tree of Func calls, as described in "Boolean Algebra for Tag queries".
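
As a sketch of what that conversion could look like, assuming the node shapes sketched earlier (again my own illustration, not the code from that post), each node compiles into a predicate over the set of tags on an item:

using System;
using System.Collections.Generic;

public static class TagExpressionCompiler {
    // Recursively compile the AST into a Func that tests a set of tags.
    public static Func<HashSet<string>, bool> Compile(TagExpression expr) {
        if(expr == TagExpression.Empty) {
            return tags => true;               // assumption: an empty expression matches everything
        }
        if(expr is Tag) {
            var name = ((Tag)expr).Name;
            return tags => tags.Contains(name);
        }
        if(expr is NotExpression) {
            var inner = Compile(((NotExpression)expr).Expression);
            return tags => !inner(tags);
        }
        if(expr is AndExpression) {
            var and = (AndExpression)expr;
            var left = Compile(and.Left);
            var right = Compile(and.Right);
            return tags => left(tags) && right(tags);
        }
        var or = (OrExpression)expr;
        var orLeft = Compile(or.Left);
        var orRight = Compile(or.Right);
        return tags => orLeft(tags) || orRight(tags);
    }
}

Applied to the example above, compiling (foo+bar)|(^foo+baz) and evaluating it against the tag set { "bar", "baz" } would return true, since foo+bar fails but ^foo+baz matches.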

The grammar could have been written to accumulate the unique tags and construct the Func tree right away, but the two benefits of going to an AST first are that a) the AST can easily be rendered back into text form (even placing parentheses properly for expressions that previously had no parentheses), and b) the AST can easily be programmatically composed with other expressions, or decomposed into sub-expressions, which can be used for caching and other efficiency optimizations.

I'll probably play with Irony next, but the "no runtime dependency" and existing familiarity made Coco/R the winner this time.

Hiding NHibernate with Linq

Been using NHibernate a lot recently, and I just love having POCO data objects. It means I can hand out objects, manipulate them and then save them back to the DB without ever exposing my persistence model. For this purpose, I'd been using Data Access Objects to give me access to the objects. But like stored procedures, after a while your DAOs end up with a mess of Find methods that either have many overloads or, even worse, optional nullable parameters. Never mind the acrobatics you jump through once you want to provide methods that query entities based on some other entity that's abstracted by its own DAO.

The alternative was to expose the NHibernate query API (talking ICriteria here; I never touched HQL because the reason I went with NHibernate in the first place is that I didn't want queries in strings), but that didn't taste right, so I added more methods to my DAO instead, hiding the criteria queries.

Once I started playing with the experimental Linq support, I didn't change my abstraction, just started using Linq instead of criteria inside the DAO. But last week, I finally broke down and just exposed session.Linq<T>() as an IQueryable<T> on my DAO. For example my Account DAO, AccountService, now simply looks like this:

public interface IAccountService
{
  Account CreateAccount(JObject json);
  Account GetAccount(int accountId);
  IQueryable<Account> Accounts { get; }
  void Save(Account account);
}

NHibernate is completely invisible, I have full query capabilities, and even mocking has become easier, since I can just return List<T>.AsQueryable() from my mock DAO. I was able to remove all my Find methods. I did leave the by-Id accessor GetAccount in there, just because it's the predominant use case and I didn't want to litter code with identical Linq queries.
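
As an illustration, a consumer of the DAO can now compose whatever query it needs against the exposed IQueryable<Account>; the Status and Created properties here are hypothetical, just to show the shape:

// hypothetical query composed against accountService.Accounts
var recentlyInactive = from a in accountService.Accounts
                       where a.Status == "inactive"
                             && a.Created > DateTime.Now.AddMonths(-3)
                       select a;

foreach(var account in recentlyInactive.Take(25)) {
    // the underlying Linq provider translates and executes the query here
    Console.WriteLine(account);
}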

I have to say, i like this approach a lot.

Teh Shiny

For technology geeks, one of the greatest enemies of productivity is teh Shiny, i.e. some algorithm, framework or language that is a possible solution to the problem at hand, but mostly attracts because of its dazzling awesomeness. When something seems like the cool way to do it, chances are you are about to step into a big pile of YAGNI.

Often, the simplest possible solution or, even more importantly, using what you know, rather than what appears to be the "best", is the way to go.

A good example is the boolean algebra parser for my tag queries. Once I realized I had a mini-DSL on my hands, my google fingers were refreshing my knowledge of Coco/R and evaluating Irony in order to build a parser to produce my AST. It took stepping back to realize that to get this to the prototype stage, some regex and a simple state machine could handle the parsing much more quickly (mostly because it avoided the tooling learning curve). It may not be the final solution, but it's one that works well enough to serve as a placeholder until the rest of the tech proves out.

Don't get me wrong, this does not mean hack up the simplest bit of code to get the job done. A straight path to the solution is likely to result in an unmaintainable mess. Certain standards of code quality need to be met, so that you can still test the result and refactor it without starting from scratch. But if you meet that criteria, getting it done fast first buys you the time to evaluate whether a) the problem you're solving is the right problem, b) the solution provided is sufficient for the intended use and c) the shiny tech really would improve the implementation.

Determining whether a path is just shiny or appropriate isn't always easy. One bit of tech that's been consuming a bit more time than I'd like is basing my persistence story on NHibernate instead of rolling my own. The temptation to use what I know is strong, but fortunately (or unfortunately?), I have enough experience from rolling my own ORM and hand-coding SQL to know that down that road lies madness. So, be sure not to mistake "learning curve" for YAGNI.