This is closely related to my last post on deferred execution gotchas, and it's basically another case of "if you inline delegated code, you may easily overlook scope side-effects". This time it's about foreach and using the loop's local item variable in deferred execution.
public void SpawnActions()
{
    foreach (ActionContext context in contexts)
    {
        int id = context.Id;
        Action<int> callback =
            (workerNumber) =>
            {
                Console.WriteLine("{0} Id: {1}/{2}", workerNumber, id, context.Id);
            };
        ThreadPool.QueueUserWorkItem(new WaitCallback(FutureExecute), callback);
    }
}

public void FutureExecute(object state)
{
    int id = worker++; // worker: an int counter field on this class
    Action<int> callback = state as Action<int>;
    Thread.Sleep(500);
    callback(id);
}
So while the foreach scope variable context is kept alive for the deferred execution, it turns out that foreach re-uses the same variable on each pass through the loop, and therefore when the Action is later executed, every one of them holds a reference to the last context. What we need to do is create a true local variable for context so that the lambda's scope holds on to the reference we actually want.
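A minimal, self-contained sketch of the fix, using int instead of the post's ActionContext (note: C# 5 later changed foreach to give the loop variable a fresh scope per iteration, but at the time of this post the local copy was required):

```csharp
using System;
using System.Collections.Generic;

class ClosureCaptureDemo
{
    static void Main()
    {
        var actions = new List<Action>();
        foreach (int i in new[] { 1, 2, 3 })
        {
            int local = i; // fresh variable per iteration -- the lambda captures this, not the shared loop variable
            actions.Add(() => Console.WriteLine(local));
        }
        foreach (Action a in actions)
        {
            a(); // prints 1, 2, 3 -- without the local copy a pre-C# 5 compiler would print 3, 3, 3
        }
    }
}
```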
Here's something I tracked down with no help from error messages:
When you copy a user control in Silverlight 1.1 from one project to another, the Xaml that the control loads will have its Build Action set to SilverlightPage. When you then run your project and try to create an instance of that control, you'll get the ever so informative AG_E_INVALID_ARGUMENT. All you need to do to fix it is set the Build Action back to Embedded Resource. Tada!
I love declarative definition of UI with behavior wired to static code. But man, in its current state, the debugging support just isn't there. It's bad enough that the strings on the declarative side linking to actions won't be updated by normal refactoring, nor show up as references, but as it stands, Xaml brings the worst part of scripting languages to compile-time checked coding:
i.e. I can call a downloader and inline pass it a bit of code to execute once the download completes. The catch, of course, is that reading the code and following the usual visual tracing of flow hides the fact that c.Initialize(dto) doesn't get called until some asynchronous time in the future. Now, that's always been a side-effect of delegates, but until they became anonymous and inline, there was no visual deception of code that looks like it's in the current flow scope but isn't.
What happened was that I needed my main routine to execute some code after FloatContainer was initialized, and by habit I created an Initialized event on FloatContainer. Of course this was superfluous, since my lambda expression called the synchronous Initialize, i.e. my action could be placed inline after that call to c.Initialize(dto) and be guaranteed to run after initialization had completed.
This scenario just meant I created some superfluous code. However, I'm sure that as I use lambda expressions more, there will be more pitfalls of writing code that doesn't consider that its execution time is unknown, as is the state of the objects tied to the scope of the expression.
This last bit about objects tied to the expression scope is especially tricky, and I think we will see some help in terms of Immutable concepts weaving their way into C# 3.x or 4.0, as the whole functional aspect of lambda expressions really works best when dealing with objects that cannot change state. Eric Lippert's been laying the groundwork in a number of posts on the subject, and while he constantly disclaims that his ponderings are not a roadmap for C#, I am still going to assume that his interest and recognition of the subject of Immutables will have some impact on a future revision of the language. Well, I at least hope it does.
With lambda expressions in C#, the Func generic delegate and its variations have been getting a lot of attention. So naturally, you might think that the lambda syntax is just a shortcut for creating anonymous delegates, whether they return values or not.
First let's look at the evolution of delegates from 1.1 to now. Delegates, simply put, are the method equivalent of function pointers. They let you pass a method call as an argument for later execution. The cool thing (and a garbage collection pitfall) is that a delegate creates a lexical closure, i.e. the delegate carries with it the object that the method gets called on. For garbage collection this means that a delegate prevents an object from being collected. That's why it's important to unsubscribe from those events you subscribed to.
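To illustrate that lifetime pitfall, here is a minimal sketch of the subscribe/unsubscribe lifecycle (Publisher and Subscriber are hypothetical names, not from the post):

```csharp
using System;

class Publisher
{
    public event EventHandler Tick;

    public void Raise()
    {
        // C# 2/3-era null check before invoking
        if (Tick != null) Tick(this, EventArgs.Empty);
    }
}

class Subscriber
{
    public int Count;
    public void OnTick(object sender, EventArgs e) { Count++; }
}

class EventLifetimeDemo
{
    static void Main()
    {
        var pub = new Publisher();
        var sub = new Subscriber();
        pub.Tick += sub.OnTick; // pub's delegate now holds a reference to sub
        pub.Raise();
        pub.Tick -= sub.OnTick; // without this, sub stays reachable for as long as pub lives
        pub.Raise();
        Console.WriteLine(sub.Count); // 1
    }
}
```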
But I digress. Let's define a delegate that returns an Integer and a method that matches that delegate:
delegate int IntProducerDelegate();

public int x = 0;

public int IntProducer()
{
    return x++;
}
With the original .NET 1.0 syntax we'd create the delegate like this:
IntProducerDelegate p1 = new IntProducerDelegate(IntProducer);
Now we can call p1() and get an integer back, and since it's a closure, each time we call p1() the originating object's x increases, as does our return value.
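The next paragraph refers to getting rid of the named method, but the intermediate step, presumably an elided p2, is the .NET 2.0 anonymous delegate, which inlines the body directly. A self-contained sketch:

```csharp
// the step between p1 and p3: a .NET 2.0 anonymous delegate,
// mirroring the IntProducerDelegate and x declared earlier in the post
delegate int IntProducerDelegate();

class AnonymousDelegateDemo
{
    static int x = 0;

    static void Main()
    {
        // no named method needed -- the body is inlined at the assignment
        IntProducerDelegate p2 = delegate { return x++; };
        System.Console.WriteLine(p2()); // 0
        System.Console.WriteLine(p2()); // 1
    }
}
```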
This got rid of the need to create a named method just to pass along a closure that manipulated our object at a later time. The other thing anonymous delegates reinforce is that delegates only care about signature: IntProducerDelegate can be assigned any delegate that takes no arguments and returns an int. That sounds like a perfect scenario for generics, and in .NET 3.5 we got just that: a set of generic delegates called Func. Using Func, we quickly get from the original delegate syntax to our lambda expression like this:
// Func<int> has exactly the same signature as IntProducerDelegate
// (note: the original post assigned this to an IntProducerDelegate, which doesn't compile --
// delegate types aren't interchangeable even with identical signatures)
Func<int> p3 = new Func<int>(IntProducer);
// which means that we don't need IntProducerDelegate at all anymore
Func<int> p4 = delegate { return x++; };
// and the anonymous delegate can also be shorthanded with a lambda expression
Func<int> p5 = () => { return x++; };
// which says: given that we take no arguments "()", execute and return "x++"
However, before there ever was Func, .NET 2.0 introduced the generic delegate Action<T>, which is a natural counterpart to Func, encapsulating a method that does not return anything. Continuing the producer example, we'll create a consumer:
delegate void IntConsumerDelegate(int i);

public void IntConsumer(int i)
{
    Console.WriteLine("The number is {0}", i);
}
Now following the same evolution of syntax we get this:
IntConsumerDelegate c1 = new IntConsumerDelegate(IntConsumer);
Action<int> c2 = new Action<int>(IntConsumer); // (assigning this to an IntConsumerDelegate, as originally written, doesn't compile)
Action<int> c3 = delegate(int i) { Console.WriteLine("The number is {0}", i); };
Action<int> c4 = (i) => { Console.WriteLine("The number is {0}", i); };
So lambda syntax can be used to create either a Func or an Action. That also means we never need to explicitly declare another delegate type: variations of these two generic delegates are our arsenal for storing lambda expressions of all kinds.
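Putting the two halves together, here is a sketch of the post's producer and consumer expressed purely with the generic delegates (no custom delegate types declared):

```csharp
using System;

class ProducerConsumer
{
    static int x = 0;

    static void Main()
    {
        Func<int> produce = () => x++; // producer: returns x, then increments it
        Action<int> consume = i => Console.WriteLine("The number is {0}", i);

        consume(produce()); // The number is 0
        consume(produce()); // The number is 1
    }
}
```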
UPDATE: Posted a follow-up here. I've finally had legitimate use for LINQ to Objects, not just to make the syntax cleaner, but also to simplify the underlying code and provide me a lot of flexibility without significant work.
I have a tree of objects that have both a type and a name. The name is unique, the Type is not. The interface is this:
public interface INode
{
    int Id { get; }
    string Name { get; set; }
    string Type { get; set; }
    List<INode> Children { get; }
}
I want to be able to find a single named Node in the tree and I want to be able to retrieve a collection of all nodes for a particular type. The searchable interface could be expressed as this:
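The interface itself appears to have been lost from the post; judging from the FindByName/FindByType implementations shown further down, it was presumably something like this reconstruction:

```csharp
public interface ISearchableNode
{
    INode FindByName(string name);
    IEnumerable<INode> FindByType(string type);
}
```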
Both require me to walk the tree and examine each node, so clearly I just want to have one walk routine and generically evaluate the node in question. In C# 2.0 parlance, that means I could pass an anonymous delegate into my generic find routine and have it recursively iterate through all the children. I also pass along a resultset to be populated.
The signature for the evaluation delegate looks like this:
delegate bool FindDelegate(INode node);
but since I'm using C# 3.0 (i.e. .NET 3.5) I can use lambda expressions to avoid creating a delegate and simplify my syntax. Instead of FindDelegate, I can simply use Func<INode,bool>:
// Instead of this:
private void Find(INode node, List<INode> resultSet, FindDelegate findDelegate);
// called like this for a Name search:
Find(this, resultSet, delegate(INode node) { return node.Name == name; });
// I can use this:
private void Find(INode node, List<INode> resultSet, Func<INode, bool> f)
// called like this:
Find(this, resultSet, node => node.Name == name);
Thus giving me the following implementation for ISearchableNode:
public INode FindByName(string name)
{
    List<INode> resultSet = new List<INode>();
    Find(this, resultSet, x => x.Name == name);
    return resultSet.FirstOrDefault();
}

public IEnumerable<INode> FindByType(string type)
{
    List<INode> resultSet = new List<INode>();
    Find(this, resultSet, x => x.Type == type);
    return resultSet;
}

private void Find(INode node, List<INode> resultSet, Func<INode, bool> f)
{
    if (f(node))
    {
        resultSet.Add(node);
    }
    foreach (INode child in node.Children)
    {
        Find(child, resultSet, f);
    }
}
Problem solved, move on... Well, except there is significant room for improvement. Here are the two main issues that ought to be resolved:
The API is limited to two types of searches, and exposing the generic Find makes for ugly syntax. It would be much nicer if queries against the tree could be expressed in LINQ syntax.
It's also inefficient for the Name search, since I'm walking the entire tree even if the first node matches the criteria.
In order to use LINQ to objects, I need to either create a custom query provider or implement IEnumerable. The latter is significantly simpler and could be expressed using the following interface:
public interface IQueryableNode : IEnumerable<INode> { }
Ok, ok, I don't even need an interface, I could just implement IEnumerable... But what does that actually mean? In the simplest sense, I'm iterating over the node's children; however, I also want to descend into the children's children and so on, so a simple foreach won't do. I could do the same tree walking with a result set as I did above and return the resulting list's enumerator to implement the interface, but C# 2.0 introduced a much more useful way to implement non-linear iterators: the yield keyword. Instead of building a list to be iterated over, yield lets the iterating code return values as they are found, which means it can be used for recursive iteration. Thus GetEnumerator is implemented simply as follows:
#region IEnumerable<INode> Members
public IEnumerator<INode> GetEnumerator()
{
    yield return this;
    foreach (Node child in Children)
    {
        foreach (Node subchild in child)
        {
            yield return subchild;
        }
    }
}
#endregion

#region IEnumerable Members
System.Collections.IEnumerator System.Collections.IEnumerable.GetEnumerator()
{
    return this.GetEnumerator();
}
#endregion
Nice and simple and ready for LINQ.
Searching for all Nodes of a type becomes
var allBar = from n in x
             where n.Type == "bar"
             select n;

foreach (Node n in allBar)
{
    // do something with that node
}
and the search for a specifically named node becomes
INode node = (from n in x
              where n.Name == "i"
              select n).FirstOrDefault();
But the real benefit of this approach is that I don't have hard-coded search methods, but can express much more complex queries in a very natural syntax without any extra code on the Node.
As it turns out, using yield for the recursive iteration also solves the second issue. Because yield returns values as it encounters them during iteration, and because one of the side effects of LINQ syntax is that creating a query does not execute it until the result set is iterated over, FirstOrDefault() actually short-circuits the query as soon as the first match (and in the case of Name, the only match) is hit.
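That short-circuiting is easy to verify with a counter inside the iterator. A simplified stand-alone sketch (not the Node class itself, just an iterator over ints):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class LazyDemo
{
    public static int visited = 0;

    public static IEnumerable<int> Numbers()
    {
        for (int i = 1; i <= 100; i++)
        {
            visited++;       // count how far the iterator actually runs
            yield return i;
        }
    }

    static void Main()
    {
        var query = from n in Numbers() where n == 3 select n;
        Console.WriteLine(visited);  // 0 -- creating the query ran nothing

        int first = query.FirstOrDefault();
        // the iterator stopped as soon as the match was found
        Console.WriteLine("{0} after {1} visits", first, visited); // 3 after 3 visits
    }
}
```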
I have yet to decide whether Extension Methods in C# are a boon or bane. I've already several times, been frustrated by Intellisense not showing me a method that was legal somewhere else, until I could figure out what using statement brought that extension method into scope. On one hand Extension Methods can be used to simplify code, on the other, I see them as the source of much confusion as they become popular.
Worse yet, they have potential for more abuse than the blink tag, imho.
The one place I see extension methods being instrumental is in defining fluent interfaces, yet another practice I have yet to decide whether I am in favor of or not. Partially, because I don't see them as intrinsically easier to read. Partially because they allow for much syntactic abuse.
So today, I created a fluent interface for an operation that I wish were just supported by the language in the first place -- the between operator. It exists in some SQL dialects and is a natural part of so many math equations. I wish I could just write:
if (0.5 < x < 1.0)
{
    // do something
}
Instead, I'll settle for this:
if (x.Between(0.5).And(1.0))
{
    // do something
}
The first part is easy: it's just an Extension Method on double. And if I just had it take the lower and upper bound, we would have been done. But this is where the fluent interface bug bites me and I want to say And. This means that Between can't return a boolean. It needs to return the result of the lower bound test along with the value being tested. That means Between returns a helper struct, which has one method, And, which finally returns the boolean value.
public static class DoubleExtensions
{
    public static BetweenHelper Between(this double v, double lower)
    {
        return new BetweenHelper(v > lower, v);
    }

    public struct BetweenHelper
    {
        public bool passedLower;
        public double v;

        internal BetweenHelper(bool passedLower, double v)
        {
            this.passedLower = passedLower;
            this.v = v;
        }

        public bool And(double upper)
        {
            return passedLower && v < upper;
        }
    }
}
That's a lot of code for a simple operation, and it's still questionable whether it really improves readability. But it is a common enough operation, if you do a lot of bounds checking, that it might be worth throwing into a common code dll. I've yet to make up my mind; I mostly did it because I wanted to play with the syntax.
Recently, I've had a need to create proxy objects in two unrelated projects. What they had in common was that they both dealt with .NET Remoting. In one it was traditional Remoting between machines, in the other working with multiple AppDomains for Assembly flexibility. In both scenarios, I had a need for additional proxies other than the Remoting created Transparent proxy and the Castle project's DynamicProxy proved invaluable and versatile. It's a bit sparse on documentation, but for the most part, you should be able to figure things out by playing with it and lurking around their forum.
If you're not familiar with the Castle project, check it out, because DynamicProxy is really only a supporting player in an excellent collection of useful tools. From their Inversion of Control Containers MicroKernel/Windsor to MonoRail and ActiveRecord (built on NHibernate), there is a lot there that can make your life easier.
But why would I need to create proxies, when the Remoting infrastructure in .NET takes care of this for you with Transparent Proxies? Well, let me describe both scenarios:
From my experience with .NET Remoting, it's dead simple to do something simple. But it really is best suited for WebService stateless calls, because there isn't a whole lot exposed to add quality of service plumbing. The transparent proxy truly is transparent until something fails. Same is true on the server side, where you don't get a lot of insight into the clients connected to you.
And then there are events, which, imho, are one of the greatest things about doing client/server programming with Remoting. Those are painful in two ways: having to have a MarshalByRefObject helper proxy the event calls, and pruning dead clients on the server, which you won't find out about until you try to invoke their event handlers.
But those shortcomings are not enough reason to fall back to sockets and custom/stateful wire protocols. Instead I like to wrap my transparent proxy in another proxy that has all the plumbing for maintaining the connection, pinging the server and intelligently handling failure. Originally I created them by hand, but I just converted my codebase over to use DynamicProxy the other day.
Using CreateInterfaceProxyWithoutTarget and having the proxy derive from my plumbing baseclass automagically provides me with an object that looks like my target interface but wraps the Remoting proxy with my quality of service code.
public static RemotingClient<T> GetClient(Uri uri)
{
    Type t = typeof(T);
    ProxyGenerator g = new ProxyGenerator();
    ProxyGenerationOptions options = new ProxyGenerationOptions();
    options.BaseTypeForInterfaceProxy = typeof(RemotingClient<T>);
    RemotingClient<T> proxy =
        (RemotingClient<T>)g.CreateInterfaceProxyWithoutTarget(
            t, new Type[0], options, new ProxyInterceptor<T>());
    proxy.Uri = uri;
    return proxy;
}
I'll do another post later just on Remoting, since the whole business of getting events to work was a bit of a labor and isn't documented that well.
The second project started with wanting to be able to load and unload plug-ins dynamically and has now devolved into a framework for auto-updating components of an App at runtime.
This involves loading the assemblies providing the dynamic components into new AppDomains, so that we can unload the assemblies again by unloading the AppDomain. Instances of the component's class are then marshaled across the AppDomain boundary by reference, so that the main AppDomain never loads the dynamic assembly. This way, when a component needs to be updated, I can dump that AppDomain and recreate it.
The resilience of the connection isn't in question here, although there is need for quite a bit of plumbing to get references to the remoted components disposed before the AppDomain can be unloaded. Again, a DynamicProxy can help on the main AppDomain side by acting as a facade to the actual reference, so that you can reload the underlying assembly and recreate the reference without the main application having to be aware of it.
But I haven't gotten that far, nor have I decided whether that would be the best way, rather than an explicit disposal and recreation of the references.
Where DynamicProxy comes to the rescue here is when your object to be passed across the boundary isn't derived from MarshalByRefObject. This could be a scenario where you are trying to use a component that's already derived from another baseclass, so adding MarshalByRefObject isn't an option. Now DynamicProxy, with its ability to provide a baseclass for the proxy, gives you a sort of virtual multiple inheritance.
This is also an ideal scenario for using Castle.Windsor as the provider of the component. Using IoC, I was able to create a generic common dll that contains all the interfaces as well as a utility class for managing the components that need to be passed across the boundary. This class never has to know anything about the dynamic assembly, other than that the assembly stuffs its components into the AppDomain's Windsor container.
The resulting logic for creating an instance that can be marshaled across AppDomain boundaries looks something like this:
public T GetInstance<T>()
{
    Type t = typeof(T);
    if (!t.IsInterface)
    {
        throw new ArgumentException("Type must be an interface");
    }
    try
    {
        T instance = container.Resolve<T>();
        if (typeof(MarshalByRefObject).IsAssignableFrom(instance.GetType()))
        {
            return instance;
        }
        else
        {
            ProxyGenerator generator = new ProxyGenerator();
            ProxyGenerationOptions generatorOptions = new ProxyGenerationOptions();
            generatorOptions.BaseTypeForInterfaceProxy = typeof(MarshalByRefObject);
            return (T)generator.CreateInterfaceProxyWithTarget(t, instance, generatorOptions);
        }
    }
    catch (Castle.MicroKernel.ComponentNotFoundException)
    {
        return default(T);
    }
}
All in all, DynamicProxy is something that really should be part of the .NET framework. Proxying and Facading patterns are just too common to only support them via the heavier and MarshalByRefObject dependent remoting infrastructure.
The version of DynamicProxy I was using was DynamicProxy2, which is part of the Castle 1.0 RC3 install. It's a lot more versatile than the first DynamicProxy and deals with generics, but mix-ins are not yet implemented in this version. However, if you just need a single mix-in, specifying the base class for your proxy can go a long way to solving that limitation.
Just a quick thought on Automatic Properties in C# 3.0 (.NET 3.5, Orcas, whathaveyou). I, like most .NET developers, have spent too much time writing
private string foo;
public string Foo
{
    get { return foo; }
    set { foo = value; }
}
This can now be replaced with
public string Foo { get; set; }
Now that's nice and all, but for the most part it doesn't seem like a big step up from just making the field public. You do get the benefit of being able to use the property wherever reflection is used to expose properties, i.e. in most data-binding scenarios, which don't accept public fields. And it lets you implement interfaces, since they can't define public fields either. But what's a lot more useful, at least to me, is replacing
private string foo;
public string Foo { get { return foo; } }
with
public string Foo { get; private set; }
Now you can create those read-only properties with very simple, clean syntax. That'll clean up a lot of code for me.
But finally we still have the scenario where properties have side-effects, like calling the event trigger for INotifyPropertyChanged. This cannot be expressed with automatic properties and we're back to defining a separate private field. What I would love to see is some syntactic sugar along the lines of
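The example the post alludes to appears to be missing. Judging from the later mention of a shorter get and not having to define the type of _foo, the wished-for sugar was presumably something along these lines (hypothetical syntax, not valid C#):

```csharp
// hypothetical sugar: automatic get, hand-written set that can reach
// the compiler-generated backing field by a conventional name
public string Foo
{
    get;
    set { _foo = value; OnPropertyChanged("Foo"); }
}
```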
I know this doesn't buy that much in syntax terseness (the get is shorter and you don't have to define the type of _foo), but it means that the private storage used by the property is part of the property definition, which makes the definition that much easier to read, imho. That, or allow pre and post conditions for set in Automatic Properties, although no syntax I can think of for that seems cleaner than the above.
Recently I've been dealing with a lot of bug fixing that I can't find a good way to wrap tests around. Which is really annoying, because it means that as things get refactored these bugs can come back. These scenarios are almost always UI related, whether it's making sure that widgets behave as expected, or monitoring the visual state of an external application I'm controlling (Live For Speed, in this case). What all these problems have in common is that the existence of a bug can only be divined by an operator, because somewhere there is a lack of instrumentation that could programmatically tell me the state I'm looking for. And to paraphrase the old management saying, "You can't test what you can't measure".
My best solution is the usual decoupling of business logic from UI. Make everything a user can do an explicit function of your model, and now you can exercise those functions with unit tests. At least your business logic is solid, even if your presentation gets left in the cold. And depending on your UI layer, a lot of instrumentation can still be strapped on. WinForms controls are generally so rich that nearly everything you would want to test for, you can probably query from the controls. But things that you can check visually within a second may take a lot of lines of code to test programmatically, and those tests go right out the window when you refactor your UI. And if you're trying to test the proper visual states of a DataGridView and its associated bindings, then you're in for some serious automation pains.
I know that business logic is the heart of the application, but presentation is what people interact with and if it's poor, then it doesn't matter how kick ass your back end is. So for the time being that means that UI takes a disproportionate amount of time to run through its possible states and it's something I would like to handle more efficiently.
With this version, LFSLib.NET gains support for the InSim Relay currently in testing (see this Forum thread for details). The InSim relay allows you to get a listing of servers connected to the relay and send and receive InSim packets to them without having to directly connect to them. To create an InSimHandler to the relay simply call:
IInSimRelayHandler implements a subset of the main InSimHandler, discarding methods, properties and events that don't make sense and adding a couple of relay specific ones.
The remainder of the changes are minor tweaks and bug fixes:
Added auto-reconnect feature to handler (really only works for TCP, since UDP is connectionless)
CameraPositionInfo now allows ShiftU & related properties to be set
Updated TrackingInterval to accept values as low as 50ms
BUGFIX: Fixed race condition where a LFSState event could be fired on a separate thread before the main thread noticed that LFS was connected, allowing for invalid NotConnected exceptions to occur.
BUGFIX: Fixed Ping event (was looking for the wrong packet)
BUGFIX: RaceTrackPlayer.HandicapIntakeRestriction always returned zero