
Iloggable

Log4Net filtering by logger

Since I keep overwriting my App.config with revision-controlled configs and promptly forgetting how to set up filters, I figured I might as well write a brief article on filters here, so I have a place to look it up next time :)

My basic tenet with logging is that lots of debug statements are good. Now some may say that it just gets too noisy after a while, so don't put them in unless you need to debug. The problem is that you usually don't know when you'll need to debug, and if the code is deployed on a server, or worse, with a customer, generating a new release so they can run the debug is a burden you shouldn't have to shoulder. And commenting out log statements (or even conditional compiles) is a code smell reeking of Console.WriteLine debugging.

But it does get noisy! And noisy also means slow! With log4net, however, noise and performance degradation are non-arguments, since aside from levels it has excellent filtering, which not only reduces the noise but also cuts out 99% of the logging overhead. The worst-case debugging example I've had was tracking down behavior in the motion control software for Full Motion Racing. The physics calculations in this software ran between 60Hz and 100Hz. When I added debug logging in that physics loop, the rate dropped to about 20Hz because of I/O overhead, and this was with either RollingFile or Udp appenders. Needless to say, motion became jerky and unusable for a rider. But I got the debug data I needed. Disabling those logging statements with filters rather than removing them left no appreciable degradation in the performance of the physics loop.

So, again, lots of debug logging == good. Because when you need that data, you need that data. But you may want to ship your code with a log4net configuration that pre-filters the loggers you know to be noisy, so that a user turning on debug logging isn't overwhelmed.

How to filter

The basic deal with log4net filters is that they are applied in order, and the first filter that matches short-circuits the rest of the chain. For example, if the first filter is a DenyAllFilter, nothing else will even be considered, since it matches all loggers. That means there are generally two approaches to filtering: whitelisting and blacklisting. It also means that if you match a logger and a subsequent filter would remove that logger, the subsequent filter is never reached, since consideration of the filter chain stops at the first match.

Whitelisting by logger

<filter type="log4net.Filter.LoggerMatchFilter">
  <loggerToMatch value="Only.Logger.To.Match" />
</filter>
<filter type="log4net.Filter.DenyAllFilter" />

LoggerMatchFilter defaults to acceptOnMatch being true, i.e. if omitted, the filter accepts (includes in logging) on match. The above will only emit logging statements for the Only.Logger.To.Match logger, since all others will hit the DenyAllFilter and be excluded.

Blacklisting by logger

<filter type="log4net.Filter.LoggerMatchFilter">
  <loggerToMatch value="Logger.To.Filter.Out" />
  <acceptOnMatch value="false" />
</filter>

This filter will show all logging statements except those for Logger.To.Filter.Out.

LoggerMatchFilter also matches on partial namespaces, which is very useful when you have a noisy namespace but one logger in it that you do want in your logs, such as:

<filter type="log4net.Filter.LoggerMatchFilter">
  <loggerToMatch value="Noisy.Namespace.But.Important" />
</filter>
<filter type="log4net.Filter.LoggerMatchFilter">
  <loggerToMatch value="Noisy.Namespace" />
  <acceptOnMatch value="false" />
</filter>

With these filters, all of Noisy.Namespace.* will be filtered out, except for Noisy.Namespace.But.Important.
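
For context, these filter chains live inside an appender definition in your config, not on the logger itself. A minimal sketch of a full appender carrying the chain above (the console appender and layout here are just placeholder choices):

<appender name="ConsoleAppender" type="log4net.Appender.ConsoleAppender">
  <layout type="log4net.Layout.PatternLayout">
    <conversionPattern value="%date %-5level %logger - %message%newline" />
  </layout>
  <!-- keep the one important logger from the noisy namespace -->
  <filter type="log4net.Filter.LoggerMatchFilter">
    <loggerToMatch value="Noisy.Namespace.But.Important" />
  </filter>
  <!-- drop everything else under Noisy.Namespace -->
  <filter type="log4net.Filter.LoggerMatchFilter">
    <loggerToMatch value="Noisy.Namespace" />
    <acceptOnMatch value="false" />
  </filter>
</appender>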

Filters are your friend

So don't get locked into thinking that your choice of verbosity lies only in log levels, and that once committed to a level it's either all the noise or none of it. To keep things sane, pre-populate your config with filters, because you are the one who knows best which loggers are of general use and which are special-case only.

Blogging on MindTouch Dev Blog

Once again, there's been extended silence over here. I have several article drafts that keep getting the short end of my time in favor of coding. In the meantime, I have blogged a couple of articles over on the MindTouch Dev blog.

In general, I've just been spending a lot of time working on RESTful services, both for MindTouch and designing the new Notify.me REST API.

How I became a SOLID advocate and didn't even know it

There's been a lot of chatter of late about SOLID. It started with Uncle Bob talking on a couple of podcasts about the SOLID principles, but the chatter really got going when Joel Spolsky and Jeff Atwood started talking smack about Uncle Bob on the Stackoverflow podcast. Since then, battle lines seem to have been drawn between the TDD/SOLID folks and those who finally found a champion fighting to get them out from under the pattern yoke. Ok, that's a bunch of hyperbole, but the main objection to SOLID appears to be that having a list of principles like that just feels bureaucratic and dogmatic, which rubs free-thinking developers the wrong way. Since I've been practicing SOLID longer than I've been aware of it, I wanted to walk through how I got here and illustrate that the principles espoused by Uncle Bob are not a yoke, but rather helpful guidelines that will save you a lot of grief down the line.

The overall goal of development should be delivering software that solves the stated problem. Beyond that, I do have some personal guiding principles for programming, i.e. what I personally want to get out of it once the raison d'être is accomplished:

  1. I want to learn new things rather than maintain old things
  2. I never want to have to fix the same bug twice
  3. I don't want to be prevented from doing something better by legacy decisions

To me, 1) means writing code that's easy to maintain, so that maintenance does not become a time suck interfering with new things, 2) means that I protect myself from regressions, and 3) means that my code should allow me to refactor it without screwing up 1) or 2). So far that seems pretty non-controversial.

As I go on, the common theme will be testing, not because testing is some higher goal and end in itself, but because testing, in my experience, lets me prove that code does what it claims and that I didn't break anything else with an addition or change. For those who think that writing tests is a lot of tedium that only leads to test maintenance rather than code maintenance, I can only respond "you're probably doing it wrong" and address that statement under Pain points of TDD below.

Doesn't QA test code?

In the early days of the web, testing was what the programmer did to make sure things didn't break, and then you relied on the customer to tell you when things didn't work correctly -- the wild west days of CGI scripts. Once I started at MP3.com, testing became more refined via QA. Sure, everybody tested their web apps before handing them off to QA (or at least they should have), but there wasn't really any formalized testing on the development side. It was all manual functional testing: firing up the app, trying out the things that should work, and looking at logs. QA was responsible for test plans, regression tests, etc.

Early on I switched from web apps to running the databases, and with that came the pain of maintaining schemas when everyone had raw SQL in strings throughout their code. So I set out to write a DB API. Being an OO geek, I quickly morphed it from an API into an ORM, which abstracted the DB and built the SQL on demand. This gave the DB group more freedom to refactor the database as needed, without every developer having to track down their SQL.

Developing the ORM did mean that I was now out of the QA loop, since my deliverables went to developers and had to work long before QA ever got involved. So I developed test suites that I could run from the shell whenever I changed something. These tests gave me confidence that I didn't just break live apps with code for a new app.

Having these types of tests, however, exposed a giant WTF in itself: why was I constantly risking the codebase by futzing around in its guts? Yes, the ORM suffered horribly from the fragile base class problem, and I had designed myself into a number of corners that could only be addressed by modifying the base. Learning this lesson, I spent a lot more time trying to build object hierarchies that provided the proper hooks to let subclasses extend the functionality without affecting or overriding the base functionality. Little did I know that I had started practicing SOLID's O, the Open/Closed Principle (OCP).

Unit tests, but not really

When I started at Printable Technologies a number of years later, I became part of an effort to migrate the existing application from ASP to ASP.NET. Like most legacy ASP applications, it was the usual single code file per page, mixing data access, business logic and HTML rendering. That was something we did not want to repeat, so we set out to separate our logical layers carefully to get greater re-use and transparency about what was going on in the application. I wanted to start off on the right foot and played around with NUnit to try out unit testing our new code base. This was before TestDriven.Net or similar tools for integration into the IDE, but at least it gave me an automated test suite rather than a series of console apps. I thought, "Now I'm doing unit testing!"

Except, like many test adoptees, I really wasn't. I was doing functional tests with a test-running harness. I was hitting test databases to check my DB abstraction layer, and as you moved up the object hierarchy, the graphs of dependent code supporting the code under test got deeper and deeper. Testing something at the front end really tested all the pieces beneath it. It certainly gave us good test coverage, but the tests were fragile, took a lot of time to write, and it was often difficult to dig out what actually failed. That didn't seem right and made me wonder whether this unit testing thing was really so great. But the test coverage did help to achieve my personal guiding principles, so I wasn't ready to give up on it just yet. I just needed to work through the pain points.

Towards actual unit testing

The two major problems we had with our tests were that each test really tested many things at once, and that the setup to get a test running was tedious and pulled in too many dependencies.

The first was a problem with our class design. We needed to break classes up into smaller functional pieces that could each be tested before testing the whole. While testability was the driving force for this change, the fact that testability required it was really a symptom of how badly coupled things were, i.e. the design was flawed. This is a pattern in testing that has since repeated itself many times: if your test is fragile or difficult, it's generally the fault of the design of the code under test, not of the testing process.

There were a bunch of monolithic classes that did lots of things at once, which meant a test failure could be any one of a hundred things. I started to break up classes into smaller pieces, each dedicated to one functional area. Now I could take the helper classes that the main class was composed of and test them independently. When the composite broke but none of the components did, I knew where to look for the failure. This compartmentalization just happens to be the Single Responsibility Principle (SRP).
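
To illustrate the shape of that change (with made-up names, not code from that project), the composite ends up only coordinating small, separately testable pieces:

public class Order { public decimal Total; }

public class OrderValidator
{
    // single responsibility: validation rules
    public bool IsValid(Order order) { return order != null && order.Total >= 0m; }
}

public class OrderStore
{
    // single responsibility: persistence
    public void Save(Order order) { /* write to storage */ }
}

public class OrderService
{
    private readonly OrderValidator validator = new OrderValidator();
    private readonly OrderStore store = new OrderStore();

    // the composite only coordinates; if its test breaks while the
    // component tests pass, the failure is in the coordination
    public bool Submit(Order order)
    {
        if (!validator.IsValid(order)) return false;
        store.Save(order);
        return true;
    }
}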

So far so good, but our second problem still dogged us. Tests were still annoying to write the further you got into the business logic, since everything built on the supporting infrastructure. I had heard about mocking and started looking into it, hoping for some magic bullet that could just create fakes for me (which had been easy back in the perl days). This was before TypeMock hit the scene, so I couldn't fake my concrete types. It was either making everything virtual (yuck) or using interfaces instead of concrete classes. Interfaces won out, but because I had yet to discover the D in SOLID, introducing a lot of interfaces also led us to a pattern that itself became a major pain point: singletons and static factory methods, both well-meaning static accessors to get around the inability to new up an interface. But before I realized that separate morass, using interfaces had led to the L, Liskov Substitution, and I, Interface Segregation, principles.

Mocking out classes with interfaces exposed a couple of places where we had an abstract base class and code accepting it that used typeof() to determine which class was actually provided. With an interface being passed in instead of the abstract base class, the typeof() logic still worked, until the first test with a mock object was run. That failure illustrated what a bad idea that bit of code was. If we say we require an object implementing an interface, any object implementing that interface should work, and that right there is the Liskov Substitution Principle (LSP). Making sure that our interfaces really represented the required functionality enabled mocking and cleaned out some inappropriate knowledge embedded in the code.
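
A contrived sketch of that antipattern (hypothetical names): the method claims to accept any implementation but type-sniffs for a concrete class, so the first mock that comes along silently skips the special-cased branch.

public interface IStore
{
    void Save(string data);
}

public class FileStore : IStore
{
    public void Save(string data) { /* write to a file */ }
}

public class Archiver
{
    // LSP violation: claims to work against any IStore, but secretly
    // special-cases one implementation; a mock IStore skips that branch
    public void Archive(IStore store, string data)
    {
        if (store.GetType() == typeof(FileStore))
        {
            // file-specific setup that really belongs inside FileStore
        }
        store.Save(data);
    }
}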

Another aspect of mocking (rolling mocks by hand rather than using a mocking framework) was that laziness dictated that you didn't want to implement a lot of things just to get a test working. So large interfaces got whittled down to just the methods required by the object taking in that interface. And that happens to be the Interface Segregation Principle (ISP).
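
In code (again, invented names), that whittling looks like splitting off the slice a consumer actually uses, so a hand-rolled fake implements one method instead of a dozen:

public class User { public string DisplayName; }

// the narrow slice one consumer needs...
public interface IUserReader
{
    User Find(string name);
}

// ...split out of the wide repository interface
public interface IUserRepository : IUserReader
{
    void Save(User user);
    void Delete(User user);
}

public class GreetingService
{
    private readonly IUserReader reader;

    public GreetingService(IUserReader reader) { this.reader = reader; }

    public string Greet(string name) { return "Hello, " + reader.Find(name).DisplayName; }
}

// the hand-rolled test fake only has to fill in one method
public class FakeUserReader : IUserReader
{
    public User Find(string name) { return new User { DisplayName = name }; }
}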

A brand new pain: wiring up lots of SRP objects

Many of the above principles were only partially applied, because they imposed a new pain: a whole new set of plumbing code that was tedious to write and maintain. The issue with separating concerns and abstracting them behind interfaces was two-fold:

  1. You can't new up an interface, so you needed factories everywhere
  2. Suddenly half the code seemed to be plumbing to wire up increasingly complex object graphs.

As I said, a lot of this was dealt with via singletons and static factory methods. Both are really just degenerate implementations of the Service Locator pattern, but we weren't even aware of that. Since this plumbing had its own set of pain points, we often skipped the abstractions unless we really needed them, to keep life simpler. Generally, we paid for that convenience in maintenance debt.

When I started writing code for Full Motion Racing, I wanted, like on every new project, to take the lessons learned from Printable and avoid the pains I had come across. I once again needed object graphs with access to singleton-type service objects, but wanted to avoid statics as much as I could because of previous experience. I built a repository of objects that I could stuff instances into, providing me a Service Locator. Looking more into how other people were doing this, I came across talk that a service locator is still an inappropriate coupling, since it itself is a dependency that has nothing to do with the responsibility of the consuming objects. Instead, services should be passed in at construction time whenever possible. Wow, really? That just seemed to take the pain of wiring up object graphs to unprecedented heights. That just couldn't be how people were writing their code.

Wanting to understand how this way of building decoupled systems could actually work in the real world, I learned about the D of SOLID, the Dependency Inversion Principle (DIP), also (and maybe more accurately) known as Inversion of Control. In my opinion, DIP may be the last principle mentioned, but in many ways it is the enabling plumbing without which the remaining principles are all well and good in theory but often feel worse than the disease they aim to cure.
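
A minimal sketch of the difference (hypothetical IClock service and builder classes; the locator is the degenerate global registry described above):

using System;
using System.Collections.Generic;

public interface IClock { DateTime Now(); }

public class SystemClock : IClock { public DateTime Now() { return DateTime.UtcNow; } }

// the service locator: a global registry every class reaches into
public static class ServiceLocator
{
    private static readonly Dictionary<Type, object> services = new Dictionary<Type, object>();
    public static void Register<T>(T service) { services[typeof(T)] = service; }
    public static T Get<T>() { return (T)services[typeof(T)]; }
}

public class LocatedReportBuilder
{
    public string Build()
    {
        var clock = ServiceLocator.Get<IClock>(); // hidden dependency
        return "Generated at " + clock.Now();
    }
}

// dependency inversion: the dependency is handed in at construction,
// by a container in production or directly by a test
public class InjectedReportBuilder
{
    private readonly IClock clock;

    public InjectedReportBuilder(IClock clock) { this.clock = clock; }

    public string Build() { return "Generated at " + clock.Now(); }
}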

Agile in action

Early use of IoC for Full Motion Racing still relied on a singleton container that factory classes could use to create their dependencies for creating transient objects. Only over time did I learn to trust the container to build up all my objects for me, and learned how to register factories to support lifestyles other than singleton via the container.

It wasn't until I started at Bunkspeed that I really saw IoC used properly and was able to reap the true benefits of this design pattern. If you've ever seen Bunkspeed's HyperShot or HyperDrive in action, you know that the visualizations they create are mind-blowing, especially once you realize they're real-time. Needless to say, sitting down with this codebase was initially intimidating. It still is the largest single codebase I've worked on. Maybe some of the distributed web apps I've worked on had more code in total, but they were disparate systems that largely had no interdependence. The main Visual Studio solution I worked on at Bunkspeed was one application with hundreds of projects all loaded at once.

I assumed this meant lots of branching, lots of areas of expertise where certain people would be responsible for a subset of the code. This was not the case. Everyone was trusted and had authority to modify, extend and refactor everything as they required. A change that required another change much lower down in the system could be made by anyone. Tests ran with every build, in addition to a full set of CI servers building various configurations on each check-in. And it all ran smoother than any other shop I'd been in, and was easier to ramp up on than other, far simpler projects.

Bunkspeed employs every one of the SOLID principles. Systems were composed of lots of small classes with very limited responsibility, each abstracted by an interface. One of the reasons for the many projects, rather than fewer, larger ones, was that areas of responsibility were segregated, including keeping their interfaces in separate DLLs so that low-level changes wouldn't cause rebuilds of the entire system. Deep-reaching refactors were not the norm, but rather an indication that some inappropriate coupling had crept in at some previous time, and the refactor served to rectify the situation, so technical debt accrued at much slower rates than is the norm. This was not some academic application of patterns from a book, but a truly agile development shop able to make significant changes with a small team in record time.

SOLID wasn't some rule set put before me, but a natural evolution of trying to make development easier. It wasn't until about 9 months ago that I read Robert C. Martin and Micah Martin's Agile Principles, Patterns, and Practices in C#, because it was sitting on a co-worker's desk, and for the first time put names to the patterns I'd been applying this entire time.

Pain points of SOLID

There is definitely a different cost incurred by applying SOLID to design. Most of this cost is in navigating the granularity of the design, and here tooling is an important aid to making this not only painless but more productive than the alternatives. The issue is that Visual Studio really isn't all that well suited to navigating large object hierarchies, especially when using interfaces for abstraction. There are those who will point to this pain as evidence that SOLID is a bad practice: "I don't want to have to get special tools just to do development." But if tooling really is your enemy, then you probably shouldn't be working with a language like C# in the first place, because it already relies on the many crutches VS offers up. Try writing C# without an IDE and you'll quickly understand why people love the simple and terse syntax of Ruby and other dynamic languages. Saying "well, Visual Studio is as much tooling as I accept, beyond that it's ridiculous" is not an argument I can relate to, so if that's the objection to SOLID, I'll have to admit that I can't convince you.

The issue is that VS does not provide efficient ways to navigate from class to class, from interface to implementers, and from implementers to usages of the interface. This is where ReSharper entered the picture for me, and after adding more keybindings for some of its extended commands, the number of classes and abstractions really becomes a non-issue. I simply couldn't do development in C# without ReSharper at this point, and that's more because of Visual Studio's shortcomings than anything else (Eclipse, for example, provides most of the same features I rely on out of the box).

The other pain created by the lack of tooling is that the larger surface area of code and the increased abstraction mean that refactorings usually touch more code than before, i.e. refactoring takes more work. With proper tooling, however, much of this happens automatically. In addition, the flexibility of the loose coupling introduced by SOLID generally makes code more pliable to refactoring.

Pain points of TDD

Finally, there is the whole concept of TDD itself, which to many seems like such a make-work paradigm. Usually the examples of TDD failure pointed to are fragile tests and lost productivity due to writing tests instead of code.

"Fragile tests" refers to a simple change breaking a lot of tests, thereby incurring extra work instead of saving it. But if a simple change breaks a lot of tests, that's a clear indication that your design needs another look, because there is coupling that is getting in the way. The only time (ideally, at least) that more than one test should break is when you change the expected behavior of a class, and in that case, refactoring the expected behavior would have included refactoring all dependencies, including the tests.

The lost productivity argument only holds water if you are not responsible for extension or maintenance. And all you are doing then is pushing the work that should have been yours in the first place onto the poor sucker who inherits your legacy. It has been my experience that any time I find a bug or break something with a new feature, it's because I didn't have test coverage on the affected code. Which means I get to lose productivity when I am likely already under time pressure, rather than up front.

Another part of the lost productivity argument usually refers to the amount of code required to test a particular object vs. just using that object in production code. Between DIP for wiring things up and the numerous mocking frameworks available to declare your expectations on dependencies, wiring up your test harness should be short. If there is a lot of plumbing required just to set up the test conditions, something's wrong with the design.
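
As a rough sketch of how short that wiring can be (using Moq-style syntax here; the types are invented for the example): declare the expectation on the dependency, hand it to the object under test, assert.

using Moq;
using NUnit.Framework;

public interface IRateProvider
{
    decimal GetRate(string currency);
}

public class PriceCalculator
{
    private readonly IRateProvider rates;

    public PriceCalculator(IRateProvider rates) { this.rates = rates; }

    public decimal InUsd(decimal amount, string currency)
    {
        return amount * rates.GetRate(currency);
    }
}

[TestFixture]
public class PriceCalculatorTests
{
    [Test]
    public void Converts_using_provided_rate()
    {
        // declare the expectation on the dependency...
        var rates = new Mock<IRateProvider>();
        rates.Setup(r => r.GetRate("EUR")).Returns(1.3m);

        // ...hand it to the object under test, done
        var calculator = new PriceCalculator(rates.Object);

        Assert.AreEqual(13m, calculator.InUsd(10m, "EUR"));
    }
}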

Not dogmatic, just providing guidance

Since I arrived at practicing the SOLID principles without being aware of them, I just can't see the dogma or bureaucracy in them. The recommendation in those 5 principles, taken as a whole, is about making your life easier, not forcing some philosophy down your throat.

If someone had put them in front of me and told me "this is the way you must code, because good programmers do this", I'd likely have been dismissive as well. Being thrown into implementing some process from a whitepaper, without having seen it in practice and understood why it is useful, has a high likelihood of leading to improper implementation, which makes the presumed failure a self-fulfilling prophecy. Taken as guidance, however, there is a lot of useful information there that can make projects, especially large ones, a lot less painful.

db4o 7.4 binaries for mono

As I talked about recently, the standard binaries for db4o have some issues with mono, so I recompiled the unmodified source with the MONO configuration flag. I've packed up both the debug and release binaries and you can get them here. These are just the binaries (plus license), not the full db4o package. If you want the full package, just get it directly from the db4o site, set the MONO config flag and have Visual Studio rebuild the package.

This package should show up on the official db4o mono page shortly as well.

db4o indexing and query performance

Indexing in db4o is a bit non-transparent, imho. There's barely a blurb in their documentation app, and it just tells you how to create an index and how to remove one. But you can't easily inspect whether an index exists, or whether it's being used. So I spent a good bit of time today trying to figure out why my queries were so slow: was an index created, and if so, was it being used? The final answer is: if querying is slow in db4o, you're not using an index, because, OMG, you'll know when you do an indexed query.

Index basics

Given an object such as

public class Foo
{
  public string Bar;
}

you create an index for that object, globally (meh), affecting all databases you create thereafter, with this call:

Db4oFactory.Configure().ObjectClass(typeof(Foo)).ObjectField("Bar").Indexed(true);

So far, straightforward enough. But what if you're using a property? Well, db4o does its magic by inspecting your underlying storage fields, so you have to index those, not the properties that expose them. That means that if our object was supposed to have a read-only property Bar, like this:

public class Foo
{
  private string bar;
  public Foo(string bar)
  {
    this.bar = bar;
  }
  public string Bar { get { return bar; } }
}

then the field you need to index is actually the private member bar:

Db4oFactory.Configure().ObjectClass(typeof(Foo)).ObjectField("bar").Indexed(true);

Given this idiosyncrasy, the obvious question is "what about automatic properties?". Well, as of right now the answer is: no such luck, because you'd have to reflect on the compiler-generated storage field and index it, and you get no guarantee that this field is named the same from compiler to compiler or version to version. That probably also means that automatic properties are dangerous all around, because you may never get your data back if the storage naming changes, although on that conclusion I'm just speculating wildly.

Query performance

Index in hand, I decided to populate a DB, always checking whether the item already existed, using a db4o native query. That started at 1 ms query time and then increased linearly with every item added. That sure didn't seem like an indexed search to me. I finally discovered a useful resource on the db4o site, but unfortunately it's behind a login, so google didn't help me find it, and my link to it will only take you to the login. That's a shame, because this bit of information ought to be somewhere in big bold letters!
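
The existence check itself was a one-liner along these lines (sticking with the Foo/Bar example from above; candidate is just the value about to be inserted):

// native query against the indexed field's property
var exists = db.Query<Foo>(f => f.Bar == candidate).Count > 0;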

You must have the following DLLs available for Native Queries to be optimized into SODA queries, which apparently is the format that hits the index:

  • Db4objects.Db4o.Instrumentation.dll
  • Db4objects.Db4o.NativeQueries.dll
  • Mono.Cecil.dll
  • Cecil.FlowAnalysis.dll

The query will execute fine regardless of their presence, but the performance difference between the optimized, index-using query and the unoptimized native query is orders of magnitude. My queries went from 100-500ms to 0.01ms, just by dropping those DLLs into my executable directory. Yeah, that's a useful change.

Interestingly enough, the same is not required for linq queries. They seem to hit the index without the extra help (although Mono.Cecil and Cecil.FlowAnalysis need to be present for linq queries to run at all, so here you at least get an error). There currently appears to be about 1ms of overhead for parsing linq into SODA, but I'll take that hit for the syntactic sugar.
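
For comparison, the equivalent linq query (via the Db4objects.Db4o.Linq extensions) reads like this, again using the Foo example:

// requires: using Db4objects.Db4o.Linq;
var matches = from Foo f in db
              where f.Bar == candidate
              select f;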

Conclusions

I'm pretty happy with the simplicity and performance of db4o so far. It seems like an ideal local, queryable persistence layer. The way it works does make me want to abstract my data model into simple data objects that are then converted into business entities. I'd rather have the attribute-based markup of ActiveRecord, but that's not a deal breaker.

Db4o on .NET and Mono

After failing to get a cross-platform sample of NHibernate/Sqlite going, I decided to try out Db4o. This is for a simple, local object persistence layer anyhow, nothing more than a local cache, so db4o sounded perfect.

The initial DLLs for 7.4 worked beautifully on .NET but ran into problems on Mono. Apparently db4o imports FlushFileBuffers from kernel32.dll if your build target is not CF or mono. And in its call to FlushFileBuffers it uses FileStream.SafeFileHandle.DangerousGetHandle(), which is not yet implemented under Mono, resulting in this exception:

Unhandled Exception: System.NotImplementedException: The requested feature is not implemented.
  at System.IO.FileStream.get_SafeFileHandle () [0x00000]
  at Sharpen.IO.RandomAccessFile.Sync () [0x00000]
  at Db4objects.Db4o.IO.RandomAccessFileAdapter.Sync () [0x00000]
  ...

I found this page on the Db4o site, which suggested just falling back to FileStream.Handle. However, that for me just resulted in this:

Unhandled Exception: System.EntryPointNotFoundException: FlushFileBuffers
  at (wrapper managed-to-native) Sharpen.IO.RandomAccessFile:FlushFileBuffers (intptr)
  at Sharpen.IO.RandomAccessFile.Sync () [0x00000]
  at Db4objects.Db4o.IO.RandomAccessFileAdapter.Sync () [0x00000]
  ...

So I simply defined MONO as a compilation symbol in Visual Studio and rebuilt it. I figure the only time this code will run on Windows is during testing, so treating it as mono is fine. That did solve my issues, and I now have a DLL for db4o 7.4 that works beautifully across .NET and mono from a single build.

Being a Linq nut, I immediately decided to skip the Native Query syntax and dive into the Linq syntax instead. That worked great on mono 2.0.1, but unfortunately on the current Redhat rpm (stuck back in 1.9.1 land), the Linq implementation isn't quite complete and you get this:

Unhandled Exception: System.NotImplementedException: The requested feature is not implemented.
  at System.Linq.Expressions.MethodCallExpression.Emit (System.Linq.Expressions.EmitContext ec) [0x00000]
  at System.Linq.Expressions.LambdaExpression.Emit (System.Linq.Expressions.EmitContext ec) [0x00000]
  at System.Linq.Expressions.LambdaExpression.Compile () [0x00000]
  at Db4objects.Db4o.Linq.Expressions.SubtreeEvaluator.EvaluateCandidate (System.Linq.Expressions.Expression expression) [0x00000]
  ...

But falling back from this syntax:

var items = from RosterItem r in db where r.CreatedAt > DateTime.Now.Subtract(TimeSpan.FromMinutes(10)) select r;

to the NativeQuery syntax (with the delegate replaced by a lambda):
var items = db.Query<RosterItem>(r => r.CreatedAt > DateTime.Now.Subtract(TimeSpan.FromMinutes(10)));

It's still a fairly compact and straightforward syntax, so until I finish setting up my own Centos mono RPMs, I'll stick with it.

I need to run db4o through some more serious paces, but I like what I see so far.

Comparison with default(T)

I was working on a generic class that I had constrained with where T : class because I wanted to use null as a valid return value. Well, then I needed to use the class with Guid, which is a value type. So I replaced every return null with return default(T). That was fine except for my enumerator, which used null to yield break out of the iteration. Unfortunately,

  if(t == default(T)) {
    yield break;
  }

wasn't legal either. Then I thought, how about

  if(t.Equals(default(T))) {
    yield break;
  }

which compiles just fine, but of course throws a NullReferenceException when t is null, since I am after all looking for a null value.

After some digging I finally came across the solution:

  if(Comparer<T>.Default.Compare(t, default(T)) == 0) {
    yield break;
  }

and that did the trick.
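
A side note on the same trick: EqualityComparer<T>.Default.Equals is also null-safe, and only requires equality semantics rather than an ordering, so this works too:

  // same null-safe check, without implying T is orderable
  if(EqualityComparer<T>.Default.Equals(t, default(T))) {
    yield break;
  }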

Saying LINQ is about databases is missing its true benefits

I just came across a long discussion about LINQ in Java on the ODBMS blog (thanks to Miguel's tweet). There is some excellent discussion in there, but aside from a couple of people, the commenters seemed to largely center on two objections:

  1. It's from MS so it's a bad idea to copy, and
  2. I don't think it adds anything over normal SQL syntax

The first is an unfortunate dismissal of a very powerful functional language construct because of its origin. And the second illustrates that the commenter does not truly understand what LINQ brings to the language in the first place.

Of course, I'd bet that 90% of .NET developers, if polled, would also equate LINQ with "type-safe SQL in the language", so this isn't a dig against Java people. Hopefully, as Parallel LINQ gains some traction, this simplification will lose its hold on people.

LINQ, or Language INtegrated Query, is really a functional way of expressing operations on collections. If you decompose a lot of code, anywhere you are using loops to manipulate collections, LINQ is likely to yield a more concise and powerful expression of the same operation. And being functional, how LINQ performs the work is opaque to the caller. The caller simply describes what operations should be done on the data sources, allowing the operations to be optimized for those sources. The data could live in a database, come from XML or REST calls, or simply exist as an in-memory object graph. None of these things change the transformations desired.

I've only used LINQ once for SQL, although I use LINQ to objects, i.e. against IEnumerable<T>, almost daily, and it's done away with a lot of foreach loops with temporary variables, temporary lists, etc. But even that one scenario, against the apparently now deprecated DLINQ or Linq2Sql, illustrated that it isn't just about replacing SQL with type-safe syntax: it allowed me to use one syntax for both database and local operations.
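
A trivial example of the kind of replacement I mean (Person is an invented type): the loop-and-temporary-list version versus one declarative expression.

using System.Collections.Generic;
using System.Linq;

public class Person
{
    public string Name;
    public int Age;
}

public static class Examples
{
    public static List<string> OfAgeImperative(List<Person> people)
    {
        // temporary list, explicit loop, manual sort
        var names = new List<string>();
        foreach (var person in people)
        {
            if (person.Age >= 21)
            {
                names.Add(person.Name);
            }
        }
        names.Sort();
        return names;
    }

    public static List<string> OfAgeLinq(List<Person> people)
    {
        // the same operation as a single expression
        return people.Where(p => p.Age >= 21)
                     .Select(p => p.Name)
                     .OrderBy(n => n)
                     .ToList();
    }
}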

This project included doing a bunch of analytical processing against the data, including projections combining a number of data sources. Not all of this was expressible in SQL, some of it wasn't a wise use of live DB queries, and performing the additional work in memory was a lot more efficient. Traditionally this type of work would exhibit a fairly obvious syntactic break between the local and the DB operations, and moving some part (say, a sort of a sub-set) from SQL to local or vice versa would be a significant re-write. But using LINQ, the syntax was identical; it was merely a matter of deciding at what point the query should be turned into concrete data vs. remaining a cursor against the DB. That is simply done by turning the IEnumerable into a List, forcing immediate execution of the query the IEnumerable represents. Either way, local or remote, the syntax stayed the same, and the decision of where processing should happen was in my hands.
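
Sketched out (with db standing in for a Linq2Sql data context and made-up table and column names), the cut-over point is just the ToList() call:

// composed against the DB; nothing has executed yet
var bigOrders = from o in db.Orders
                where o.Total > 1000m
                select o;

// ToList() runs the SQL here; everything below happens in memory
var local = bigOrders.ToList();

// same syntax, now LINQ to objects
var byCustomer = from o in local
                 group o by o.CustomerId into g
                 select new { Customer = g.Key, Total = g.Sum(o => o.Total) };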

I do support the goal of getting something akin to LINQ into Java, but I sure hope they don't attack the problem by creating some DB-centric query DSL. The greatest benefit of LINQ in .NET, imho, is that instead of its syntax being hacked into the language, it is built from anonymous delegates, lambda expressions, anonymous types, var syntax and object initializers. Each of these pieces is a fundamental part of C# 3.0 and can be used independently, but together they allow LINQ to exist. Discussions of common fluent APIs for databases, or of whole new languages like SBQL, miss the benefit of the "Language Integrated" part of LINQ.

It's going to be interesting to follow how this evolves, since I personally think that LINQ is one of the key differentiators between Java and C# that isn't just syntactic sugar (even though many think it is just that).

Dream access control

I just finished an article over on the MindTouch blog about tweaking Dream's default access patterns. I really like how Dream uses cookies, something you don't often see in REST services. Generally it's all X-My-Cool-Auth-Header business, which is yet another manual burden for developers. I'm not sure whether this originated because people did raw http requests and didn't know that most http request mechanisms have cookie support (even curl has a cookie jar), or whether it was a dislike of cookies.

The article also briefly touches on Prologues and Epilogues, a topic I need to go into in more detail some time in the future. Basically, every Feature call can have n pre and post actions that can do anything from checking authentication to mutating the request (think accepting data in json or Xml, and having a prologue and epilogue do transformations on the way in and out, so that the feature itself doesn't have to worry about the data format but can assume that it always gets Xml). The system kind of reminds me of apache handler chaining from mod_perl.