
Linq2MongoDB: Building a Linq Provider for MongoDB

This weekend has been a hack-a-thon, trying to build a simple Linq provider for MongoDB. I'm using Sam Corder et al.'s excellent C# MongoDB driver as the query pipeline, so my provider really is just a translator from Linq syntax to Mongo document query syntax. I call it a hack-a-thon because it's my first Linq provider attempt and, boy, is that query translator state machine ugly already. However, I am covering every bit of syntax with tests, so that once I understand it all better, I can rewrite the translator in a cleaner fashion.

My goal for this provider is to replace a document storage layer I've built for a new notify.me project using NHibernate against MySQL. This is in no way a judgment against NHibernate. It just happens that for this project, my schema is a heavily denormalized JSON document database. While Fluent NHibernate made it a breeze to map it into MySQL, it's really an abuse of an RDBMS. It was a case of prototyping with what you know, but now it's time to evaluate whether a document database is the way to go.

Replacing the existing NHibernate code does mean that, eventually, I want the provider to work with POCO entities and use a fully strongly-typed query syntax. But that layer will be built on top of the string-key based version I'm building right now. The string-key based version will remain the primary layer, so that you never lose any of the schema-less flexibility of MongoDB, unless you choose to.

Basic MongoDB queries

So, lacking an entity with named properties to map against, what does the syntax look like right now? The first thing we need is an IQueryable<Document>, which is created like this:

var mongo = new Mongo();
var queryable = mongo["db"]["collection"].AsQueryable();

Given the queryable, the queries can be built using the Document indexer like this:

var q = from d in queryable where (string)d["foo"] == "bar" select d;

The Document indexer returns an object, which means a cast is unfortunately required on one side of the conditional. Alternatively, Equals, either the static or the instance version, also works, alleviating the need for a cast:

var q = from d in queryable where Equals(d["foo"], "bar") select d;
// OR
var q = from d in queryable where d["foo"].Equals("bar") select d;

Better, but still not as nice as operator syntax would be, if only we could get rid of the casts.

As it turns out, there are a number of query operators in MongoDB that have no equivalent syntax in Linq, so a helper class for generating query expressions was needed anyway. The helper is instantiated via the Document extension method .Key(key), giving us the opportunity to overload operators for the various types recognized by MongoDB's BSON. This allows for the following conditional syntax:

var q = from d in queryable
        where d.Key("type") == "customer" &&
              d.Key("created") >= DateTime.Parse("2009/09/27")
              d.Key("status") != "inactive"
        select d;
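
The helper's internals aren't shown here, but the trick is roughly the following (a hypothetical sketch, not the actual implementation): the overloaded operators never execute; they only need to exist so the compiler accepts the where clause as an expression tree, which the provider then translates by walking that tree.

// Hypothetical sketch -- these operators are never invoked at runtime;
// they only make the where clause compile into an expression tree that
// the provider can translate into a Mongo query document.
public class DocumentKey
{
    private readonly string key;
    public DocumentKey(string key) { this.key = key; }

    public static bool operator ==(DocumentKey k, string value) { throw new NotSupportedException(); }
    public static bool operator !=(DocumentKey k, string value) { throw new NotSupportedException(); }
    public static bool operator >=(DocumentKey k, DateTime value) { throw new NotSupportedException(); }
    public static bool operator <=(DocumentKey k, DateTime value) { throw new NotSupportedException(); }
}

public static class DocumentExtensions
{
    public static DocumentKey Key(this Document document, string key)
    {
        return new DocumentKey(key);
    }
}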

IN and NOT IN

In addition to normal conditional operators, the query expression helper class also defines IN and NOT IN syntax:

var isIn = from d in queryable where d.Key("foo").In("bar", "baz") select d;

var notIn = from d in queryable where d.Key("foo").NotIn("bar", "baz") select d;

The helper will be the extension point for supporting more of MongoDB's syntax, so most query definitions will use the d.Key(key) syntax.

findOne, limit and skip

Linq has matching counterparts to MongoDB's findOne(), limit() and skip() in First or FirstOrDefault, Take and Skip respectively, and the current version of the Linq provider already supports them.
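
For example, something along these lines (a sketch against the queryable from above, not verbatim provider tests) should translate to the corresponding Mongo calls:

// findOne() equivalent
var first = (from d in queryable where Equals(d["type"], "customer") select d).FirstOrDefault();

// skip(20) + limit(10) equivalent
var page = queryable.Skip(20).Take(10);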

What's missing?

There is a lot in Linq that will likely never be supported, since MongoDB is not a relational DB. That means joins, sub-queries, etc. will not be covered by the provider. Anything that does map to MongoDB's capabilities, though, will be added over time. The low-hanging fruit are Count() and order by, with group by following thereafter.

Surprisingly, || (OR conditionals) is not going to happen as fast, since aside from OR-type queries using the .In syntax, it is not directly supported by MongoDB. In order to perform || queries, the query has to be written as a JavaScript function, which basically means that as soon as a single || shows up in the where clause, the query translator would have to rewrite all the other conditions in JavaScript as well. So that's a bit more on the nice-to-have end of the priority spectrum.

Ready to go!

I will most likely concentrate on the low-hanging fruit and then work on the POCO query layer next, since my goal is to be able to try out MongoDB as an alternative to my NHibernate code.

All that said, the code described above works now and is ready for some test driving. It's currently only in my branch on github, but I hope it will make it into master soon.

About Concurrent Podcast #3: Coroutines

Posted a new episode of the Concurrent Podcast over on the MindTouch developer blog. This time Steve and I delve into coroutines, a programming pattern we use extensively in MindTouch 2009 and one that I'm also trying out as an alternative to my actor-based XMPP code in Notify.me.

Since there isn't a native coroutine framework in C#, we're using the one provided by MindTouch Dream. It's built on top of the .NET iterator pattern (i.e. IEnumerable and yield) and makes the assumption that all coroutines are asynchronous methods using Dream's Result<T> object to coordinate the producer and consumer of a return value. Steve has previously blogged about Result. Since those posts, there have also been a lot of performance and capability improvements to Result committed to trunk, primarily robust cancellation with resource cleanup callbacks. For background on coroutines, you can also check out previous posts I've written.

The cool thing about asynchronous coroutines compared to an actor model is that call/response actions can be written as a single linear block of code, rather than as separate message handlers whose contiguous flow can only be determined by examining the message dispatcher. With a message dispatcher that can correlate message responses with suspended coroutines, sending and waiting for a message in a coroutine can be made to look like a method call without blocking the thread, which, especially with message-passing concurrency, is vital, since a response isn't in any way guaranteed to happen.
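
To make that concrete, here is a minimal sketch of the underlying idea, not Dream's actual API: a hypothetical Result<T> and a coroutine that suspends at a yield until the result arrives, with the dispatcher's role played by hand-cranked MoveNext calls.

using System;
using System.Collections.Generic;

// Hypothetical stand-in for Dream's Result<T>, just enough for the sketch.
public class Result<T> {
    public bool HasValue { get; private set; }
    public T Value { get; private set; }
    public void Return(T value) { Value = value; HasValue = true; }
}

public static class CoroutineSketch {
    // Reads like a linear method; each yield suspends the coroutine
    // until the dispatcher decides the awaited Result has completed.
    public static IEnumerable<object> Greet(Result<string> name, Result<string> response) {
        yield return name;                 // "send" a message and wait for its response
        response.Return("hello, " + name.Value);
    }

    public static void Main() {
        var name = new Result<string>();
        var response = new Result<string>();
        var coroutine = Greet(name, response).GetEnumerator();
        coroutine.MoveNext();              // run to the first yield; no thread is blocked
        name.Return("world");              // the message response arrives...
        coroutine.MoveNext();              // ...and the coroutine resumes
        Console.WriteLine(response.Value); // "hello, world"
    }
}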

I'm due to write another post on how to use Dream's coroutine framework, but in the meantime I highly recommend checking out Dream from MindTouch's svn. Lots of cool concurrency stuff in there. Trunk is under heavy development as we work towards Dream profile 2.0, but 1.7.0 is stable and production proven.

Composing remote and local linq queries

One of the cool things about Linq is that queries are composable, i.e. you can add further query constraints by selecting from an existing query. Nothing is executed until you try to read from the query. This allows IQueryable to compose all the added constraints and transform them into the underlying query structure, most commonly SQL.

However, it does come with the pitfall that a lot of things that are legal in Linq expressions will die at run time. This happens because an expression may not have an equivalent in the transformed language, like calling a method as part of a where clause.

This does not mean that you can't use Linq for the portion of the query that is not executable by the provider. As long as you know which expression is affected, you can use query composition to build a query that executes one part remotely and the other part against the object model in memory.

Let's suppose we wish to execute the following query:

var q = from e in session.Queryable<Entry>()
    where e.Created > DateTime.Parse("2009/9/1")
        &&  e.Created < DateTime.Now
        && e.Tags.Contains("foo")
    select e;

But our query provider doesn't understand the extension method that lets us check the list of Tags. For this query to work, that portion must be executed against the result set of the date-range query. We could coerce the first portion to a list or array and then query that, but doing so would force the date query to materialize before we could prune the set. Instead, we want to feed the stream of matching entries into a second query, composing a new query that contains both portions as a single query and won't access the database until we iterate over it.

To accomplish this I created an extension method that coerces the query into a sequence that yields each item as it is returned by the database query:

public static class LinqAdapter {
    public static IEnumerable<T> AsSequence<T>(this IEnumerable<T> enumerable) {
        foreach(var item in enumerable) {
            yield return item;
        }
    }
}

UPDATE: As Scott points out in the comments, my AsSequence just re-implements what is already available in Linq as AsEnumerable. So the above really just serves to explain how AsEnumerable defers execution to enumeration rather than query definition.

Anyway, AsSequence or AsEnumerable allows me to compose the query from server and local expressions like this:

var q = from e in session.Queryable<Entry>()
    where e.Created > DateTime.Parse("2009/9/1")
        &&  e.Created < DateTime.Now
    select e;
q = from e in q.AsSequence() where e.Tags.Contains("foo") select e;

When q is enumerated, the first expression is converted to SQL and executed against the database. Each item returned from the database is then fed into the second query, which checks its expression and yields the item to the caller if the expression matches. Since q.AsSequence() is used as part of query composition, it does not force the first expression to execute at query-definition time, as q.ToList() would. The additional benefit is that even when q.AsSequence() is executed, it never builds the entire result set in memory as a list to iterate over, but rather streams each database result item through its own expression evaluation.

Of course, this still has the performance implications of sending data across the wire and filtering it locally. However, this is not an uncommon problem when SQL alone cannot provide all the filtering. The benefits of this approach are reduced memory pressure on execution, better control over when execution occurs, and the ability to use Linq syntax for the secondary filtering.

Concurrent Podcast and Producer/Consumer approaches

As usual, I've been blogging over on the MindTouch developer blog, and since the topics I post about over there have a pretty strong overlap with what I'd post here, I figured I might as well start cross-posting about it here.

Aside from various technical posts, Steve Bjork and I have started recording a Podcast about concurrent programming. It's currently 2 episodes strong, with a third one coming soon. Information on past and future posts can always be found here.

Today's post on the MindTouch dev blog is about the producer/consumer pattern and how I moved from using dedicated workers with a blocking queue to dispatching work with Dream's new ElasticThreadPool.
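
For context, the pattern I'm moving away from looks roughly like this (a hand-rolled sketch, not Dream's code): dedicated worker threads draining a Monitor-based blocking queue.

using System;
using System.Collections.Generic;
using System.Threading;

public class BlockingQueue<T> {
    private readonly Queue<T> queue = new Queue<T>();

    public void Enqueue(T item) {
        lock(queue) {
            queue.Enqueue(item);
            Monitor.Pulse(queue); // wake one waiting worker
        }
    }

    public T Dequeue() {
        lock(queue) {
            while(queue.Count == 0) {
                Monitor.Wait(queue); // block the worker until an item arrives
            }
            return queue.Dequeue();
        }
    }
}

public static class WorkerExample {
    public static void Main() {
        var work = new BlockingQueue<Action>();

        // dedicated workers, parked on the queue whether there is work or not
        for(var i = 0; i < 4; i++) {
            new Thread(() => { while(true) work.Dequeue()(); }) { IsBackground = true }.Start();
        }

        work.Enqueue(() => Console.WriteLine("hello from a worker"));
        Thread.Sleep(100); // give the worker a chance to run before exiting
    }
}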

Writing a Tag Algebra compiler with Coco/R

This week, I was digging back into the Coco/R-generated parser for DekiScript, tracking down a bug which turned out not to be in the parsing bits at all. It did, however, get me familiarized with Coco/R again. So I thought I'd give myself an hour to implement the parser for my tag-related boolean algebra with Coco/R. If I could pull it off, I'd forget about the regex/state-machine approach I was considering.

It took me about 15 minutes to set up the classes representing the intermediate AST and another 30 minutes for the grammar in Coco/R's ATG format. After that I wrote a couple of unit tests to check that the parsing was right, only to realize that while AND and OR are left-to-right associative, NOT is right-to-left associative. Figuring out how to adjust the grammar for that took me another 10-15 minutes. But overall, I hit the one-hour goal.

The Syntax Tree

Before tackling the grammar to parse, I needed to define data structures to represent the parsed syntax tree, which I'd later convert to executable code. The syntax is fairly simple:

(foo+bar)|(^foo+baz)

This can be represented by just 4 tree node types (with the common parent TagExpression):

  • AndExpression
  • OrExpression
  • NotExpression
  • Tag

The parentheses are just tree structure artifacts, so they are not represented in the AST.
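
A minimal sketch of what those node classes might look like (the names match the grammar actions below; the exact shape of the classes is my assumption):

public abstract class TagExpression {
    public static readonly TagExpression Empty = new EmptyExpression();
    private class EmptyExpression : TagExpression { }
}

public class AndExpression : TagExpression {
    public readonly TagExpression Left;
    public readonly TagExpression Right;
    public AndExpression(TagExpression left, TagExpression right) { Left = left; Right = right; }
}

public class OrExpression : TagExpression {
    public readonly TagExpression Left;
    public readonly TagExpression Right;
    public OrExpression(TagExpression left, TagExpression right) { Left = left; Right = right; }
}

public class NotExpression : TagExpression {
    public readonly TagExpression Inner;
    public NotExpression(TagExpression inner) { Inner = inner; }
}

public class Tag : TagExpression {
    public readonly string Name;
    public Tag(string name) { Name = name; }
}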

The Coco/R grammar

I've broken the grammar up to discuss the various parts, but the below sections represent a single ATG file, the Coco/R grammar format.

COMPILER TagAlgebra

  public TagExpression Result = TagExpression.Empty;

The COMPILER line defines the entry point PRODUCTION for the parser. The following lines, until the next grammar section, are inserted into the generated Parser class and can be used to inject class fields, extra methods, etc. into the Parser source. The only thing I inserted was a field to hold the root of the AST, initialized to empty.

IGNORECASE

This tells Coco/R that our grammar is case-insensitive.

CHARACTERS
  tab = '\t'.
  eol = '\n'.
  cr = '\r'.
  nbsp = '\u00a0'. // 0xA0 = 'unbreakable space'
  shy = '\u00ad'.  // 0xAD = 'soft-hyphen'
  letter = 'a'..'z'.
  digit = '0'..'9'.

CHARACTERS defines characters that the parser should recognize for matches.

TOKENS
  tag =
    letter { letter | digit | "_" | ":" }
    .

The only token in the grammar is tag, which is composed from the character sets defined above plus a couple of quoted characters.

IGNORE eol + cr + tab + nbsp + shy

IGNORE tells Coco/R what characters have no meaning in parsing the input.

Next come the PRODUCTIONS, i.e. the meat of the grammar. These are the rules for matching input and converting it into code. Coco/R is an LL(1) parser generator, i.e. the grammar must be parsable from left to right with leftmost derivations and one look-ahead symbol. We also cannot have a loop in the grammar, i.e. all possible branches have to lead to a terminal via a unique set of production matches.

PRODUCTIONS

  TagAlgebra                      (. Result = TagExpression.Empty; .)
    =
    [ BinaryExpr<out Result> ]
    .

The first production is the entry point, which, again, sets the result to an empty AST, since the same parser instance can parse multiple expressions. It then matches zero or one BinaryExpr productions.

  BinaryExpr<out TagExpression expr>  (.  expr = null; TagExpression right = null; .)
    =
    NotExpr<out expr> {               (. bool and = false; .)
      (
        "|"
        | "+"                         (. and = true; .)
      )
      NotExpr<out right>              (. expr = and ? (TagExpression)new AndExpression(expr,right) : new OrExpression(expr,right); .)
    }
    .

BinaryExpr is used for both AND and OR expressions, since both take two sub-expressions and combine them with a single operator. The production specifies a left side of a NotExpr, followed by an optional operator and another NotExpr. I.e. should our first match happen not to be a BinaryExpr, the parser can fall through to NotExpr and return its result instead, without matching the optional production body.

  NotExpr<out TagExpression expr> (. expr = null; int notCount = 0; .)
    =
    {
      "^"                         (. notCount++; .)
    }
    (
      "(" BinaryExpr<out expr> ")"
      | tag                       (. expr = new Tag(t.val); .)
    )                             (. for(var i=0;i<notCount;i++) expr = new NotExpression(expr); .)
    .

END TagAlgebra.

NotExpr, just like BinaryExpr, matches its operator optionally, requiring only that after any operators it matches either a BinaryExpr enclosed by parentheses (not a circular match back into BinaryExpr, since the parentheses are additional required surrounding matches) or a tag, the ultimate terminal in the grammar.

There is one tricky bit in this production: the NOT operator can match multiple times, which means we need to count the operator matches and wrap the current expression in a chain of NOT expressions once we know how many, if any, matched.

What to do with the AST

The nice thing with Coco/R is that it adds no runtime dependency at all, building a fully self-contained Scanner and Parser. With these built, it is now possible to take Tag Algebra expressions and turn them into an executable tree of Func calls, as described in "Boolean Algebra for Tag queries".

The grammar could have been written to accumulate the unique tags and construct the Func tree right away, but the two benefits of going to an AST first are that a) the AST can easily be rendered back into text form (even placing parentheses properly for expressions that previously had none), and b) the AST can easily be programmatically composed with other expressions, or decomposed into sub-expressions, which can be used for caching and other efficiency optimizations.
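
Benefit a) is only a few lines of code, given node classes like the ones sketched earlier (again, an illustration rather than the actual implementation):

// Renders an AST back to text, parenthesizing every binary expression
// whether or not the original input had the parentheses.
public static string Render(TagExpression expr) {
    var and = expr as AndExpression;
    if(and != null) return "(" + Render(and.Left) + "+" + Render(and.Right) + ")";
    var or = expr as OrExpression;
    if(or != null) return "(" + Render(or.Left) + "|" + Render(or.Right) + ")";
    var not = expr as NotExpression;
    if(not != null) return "^" + Render(not.Inner);
    return ((Tag)expr).Name;
}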

I'll probably play with Irony next, but the "no runtime dependency" and existing familiarity made Coco/R the winner this time.

Hiding NHibernate with Linq

Been using NHibernate a lot recently, and I just love having POCO data objects. It means I can hand out objects, manipulate them and save them back to the DB without ever exposing my persistence model. For this purpose, I'd been using Data Access Objects to give me access to the objects. But like stored procedures, after a while your DAOs accumulate a mess of Find methods that either have many overloads or, even worse, a pile of optional nullable parameters. Never mind the acrobatics you jump through once you want to provide methods that query entities based on some other entity that's abstracted by its own DAO.

One option was to expose the NHibernate query API (talking ICriteria here; I never touched HQL, because the reason I went with NHibernate in the first place was that I didn't want queries in strings), but that didn't taste right, so I added more methods to my DAO instead, hiding the criteria queries.

Once I started playing with the experimental Linq support, I didn't change my abstraction, just started using Linq instead of criteria inside the DAO. But last week, I finally broke down and just exposed session.Linq<T>() as an IQueryable<T> on my DAO. For example, my Account DAO, AccountService, now simply looks like this:

public interface IAccountService
{
  Account CreateAccount(JObject json);
  Account GetAccount(int accountId);
  IQueryable<Account> Accounts { get; }
  void Save(Account account);
}

NHibernate is completely invisible, I have full query capabilities, and even mocking has become easier: I just return a List<T>.AsQueryable() from my mock DAO. I was able to remove all my Find methods. I did leave the by-Id accessor GetAccount in there, just because it's the predominant use case and I didn't want to litter the code with identical Linq queries.
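
For illustration, the NHibernate-backed implementation might look something like this (a sketch assuming a constructor-injected ISession; session.Linq<T>() is the experimental NHibernate Linq extension mentioned above):

using System;
using System.Linq;
using NHibernate;
using NHibernate.Linq;
using Newtonsoft.Json.Linq;

public class AccountService : IAccountService {
    private readonly ISession session;

    public AccountService(ISession session) {
        this.session = session;
    }

    // expose the session's queryable directly; no Find methods needed
    public IQueryable<Account> Accounts {
        get { return session.Linq<Account>(); }
    }

    // kept as a convenience for the predominant use case
    public Account GetAccount(int accountId) {
        return session.Get<Account>(accountId);
    }

    public Account CreateAccount(JObject json) {
        throw new NotImplementedException(); // construction details omitted here
    }

    public void Save(Account account) {
        session.SaveOrUpdate(account);
    }
}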

I have to say, I like this approach a lot.

Teh Shiny

For technology geeks, one of the greatest enemies of productivity is teh Shiny, i.e. some algorithm, framework or language that is a possible solution to the problem at hand, but mostly attracts because of its dazzling awesomeness. When something seems like the cool way to do it, chances are you are about to step into a big pile of YAGNI.

Often, the simplest possible solution or, even more importantly, using what you know, rather than what appears to be the "best", is the way to go.

A good example is the boolean algebra parser for my tag queries. Once I realized I had a mini-DSL on my hands, my google fingers were refreshing my knowledge of Coco/R and evaluating Irony, in order to build a parser that would build my AST. It took stepping back to realize that to get this to the prototype stage, some regex and a simple state machine could handle the parsing much more quickly (mostly because it avoided the tooling learning curve). It may not be the final solution, but it's one that works well enough to serve as a placeholder until the rest of the tech proves out.

Don't get me wrong, this does not mean hack up the simplest bit of code that gets the job done. A straight-line hack to the solution is likely to result in an unmaintainable mess. Certain standards of code quality need to be met, so that you can still test the result and refactor it without starting from scratch. But if you meet those criteria, getting it done fast first buys you the time to evaluate whether a) the problem you're solving is the right problem, b) the solution provided is sufficient for the intended use, and c) the shiny tech really would improve the implementation.

Determining whether a path is just shiny or actually appropriate isn't always easy. One bit of tech that's been consuming a bit more time than I'd like is basing my persistence story on NHibernate instead of rolling my own. The temptation to use what I know is strong, but fortunately (or unfortunately?), I have enough experience from rolling my own ORM and hand-coding SQL to know that down that road lies madness. So, be sure not to mistake "learning curve" for YAGNI.

Boolean Algebra for Tag queries

Currently working on a tag query engine, and I couldn't find anything all that useful in the published approaches. I want to run arbitrary boolean algebra against a set of tags in a database, which seems to be out of scope for SQL approaches. All the various tagging schemas out there reduce to either just AND or just OR queries, not complex logic. However, I want to be able to do something like:

(foo+bar)|(foo+baz)|(bar+^baz)

If there is a way to do this with SQL, I'd love to know. But the way I look at it, I really have to fetch all tags for each item and then apply that formula to the item's list of tags.

But let's break down the matching problem itself into something I can execute. Let's assume I've got a simple parser that can turn the above into an AST. Really, I can decompose any variation into three operations, AND(a,b), OR(a,b) and NOT(a), and I can represent those with some simple Func<> definitions:

Func<bool, bool, bool> AND = (a, b) => a && b;
Func<bool, bool, bool> OR = (a, b) => a || b;
Func<bool, bool> NOT = (a) => !a;

Assuming that I have boolean tokens for foo, bar and baz, the expression becomes:

OR(AND(foo, bar), OR(AND(foo, baz), AND(bar,NOT(baz))))

Now, the above expression can be expressed as a function that takes three booleans describing the presence of the mentioned tags, ignoring any other tags that the item has, returning a boolean indicating a successful match. In C# that expression would look like this:

Func<bool[], bool> f = x => OR(AND(x[0], x[1]), OR(AND(x[0], x[2]), AND(x[1],NOT(x[2]))));

Next we need to generate this boolean map from the list of tags on the item. Assuming that the tag list is a list of strings, we can define an extension method on IEnumerable<string> to generate the boolean map like this:

public static bool[] GetBinaryMap(this IEnumerable<string> tags, string[] mask) {
    var map = new bool[mask.Length];
    foreach(var x in tags) {
        for(var i = 0; i < mask.Length; i++) {
            if(x == mask[i]) {
                map[i] = true;
            }
        }
    }
    return map;
}

And with this we can define a linq query that will return us all matching items:

var mask = new[] { "foo", "bar", "baz"};
Func<bool[], bool> f = x => OR(AND(x[0], x[1]), OR(AND(x[0], x[2]), AND(x[1],NOT(x[2]))));
var match = from item in items
            where f(item.Tags.GetBinaryMap(mask))
            select item;

Clearly this isn't the fastest-executing query, since we first have to materialize our items, each with its collection of tags. But there are a lot of optimizations left on the table here, such as using our tag mask to pre-qualify items, breaking down the AST into sub-matches that could be used against a cache to find items, etc.

But at least we have a fairly simple way to take complex boolean algebra on tags and convert it into something that we can evaluate generically.

Setting up mocks for an inner autofac container

This is definitely an edge-case testing scenario, so I don't know how useful this utility class is in general, but I thought it was kinda fun deferred-execution stuff, so why not post it?

Here's my scenario: I've built some Dream REST services that I've set up to create an inner Autofac container per request, to create per-request instances of objects -- things like the NHibernate ISession and other disposable resources that only make sense in a per-request context.

Now I'm writing my unit tests around these services and need to provide mocks for the objects created by the inner container. I should also mention that to test the services, I am firing up a DreamHost, since the services can only be tested in the context of the full pipeline. Yeah, I know, smells a bit like functional testing, but that's what I have to work with right now. And I need these objects to be ContainerScoped (i.e. per-inner-container singletons), so that multiple Resolve calls return the same instance, but multiple requests still get different instances. Ok, ok, I know the tests are doing too much... Like I said, this is an edge case. It's not strictly a unit test, but I still want coverage on this code. Getting around this would require refactoring code that's not part of my project, so there you go.

What I want to do is set up the mock for the inner-container instance at creation time, which doesn't happen until I've handed execution control over to the Act part of the test. This led me to create a factory that provides a hook for setting up the mock when it is created:

public class DelegatedMoqFactory<T> where T : class
{
    private Action<Mock<T>, IContext> setupCallback;
    private Mock<T> mock;

    public Mock<T> CurrentMock { get { return mock; } }

    public T CreateInstance(IContext container)
    {
        // create the mock on demand and give the test a chance to set it up
        mock = new Mock<T>();
        if(setupCallback != null)
        {
            setupCallback(mock, container);
        }
        return mock.Object;
    }

    public void OnResolve(Action<Mock<T>, IContext> setupCallback)
    {
        this.setupCallback = setupCallback;
    }
}

A sample Autofac wire-up looks like this:

builder.RegisterGeneric(typeof(DelegatedMoqFactory<>));
builder.Register(c => c.Resolve<DelegatedMoqFactory<IAuthService>>().CreateInstance(c))
    .ContainerScoped();

With a test setup of the IAuthService being done like this:

container.Resolve<DelegatedMoqFactory<IAuthService>>()
    .OnResolve((m, c) => m.Setup(x =>
        x.AuthenticateAndGetAccountId("authtoken")).Returns(1234));

The open generic DelegatedMoqFactory<T> is registered with default scope, since I want it to exist outside the inner scope, so that I can resolve it to wire up my expectations for the mock. Then, on the first access to IAuthService inside the inner scope, the DelegatedMoqFactory creates the mock and calls my OnResolve callback to set it up.

The reason there is also a CurrentMock accessor is so that I can do verification on the mock after the inner container has gone out of scope, like this:

container.Resolve<DelegatedMoqFactory<IAuthService>>()
    .CurrentMock.Verify(x =>
        x.CreateAuthToken(It.IsAny<IAccount>()), Times.Never());

This class should be useful whenever you are testing code that internally creates an inner container and registering the objects usually created under ContainerScope with default scope doesn't work (likely because there are multiple inner containers). We still get per-inner-container instances, but get to wire them up with deferred setups that don't come into play until the mocks are actually pulled from the inner container.

Moq rocks

Ok, so I'm not proud of it, but I've been a hold-out on mocking frameworks for a while. With the interface auto-generation that ReSharper gives me, I'd just gotten pretty fast at rolling my own mocks. Once or twice a year, I'd dip my toe into a mocking framework, find its syntax frustrating, and rather than getting properly used to it, I'd soon find myself replacing NotImplementedException in a stub for yet another custom mock object.

My reasoning was that if I can roll my own mocks just as fast as wiring up a mock, then why bother with the dependency? And I thought I wasn't really causing myself too much extra work.

In the meantime, I even wrote a simple Arrange/Act/Assert mocking harness for Dream, so I could better test REST services. So it's not like I didn't believe in the benefits of mocking harnesses.

Well, for the last couple of weeks, I've been using Moq, and it's pretty much killed off all my desire to roll my own. I'm generally a huge fan of lambdas and have gotten used to thinking in expressions. Even so, I wasn't able to get comfortable with the latest Rhino.Mocks. Probably just me. But from the very first attempt, Moq worked like I think, and I was up and running.

// assumes an interface like: public interface IFoo { string Bar { get; } }
var mock = new Mock<IFoo>();
mock.Setup(x=> x.Bar).Returns("bar").AtMostOnce();
var foo = mock.Object;
Assert.AreEqual("bar",foo.Bar);

I'm a convert!