Iloggable

My new Visual Studio Dev workhorse: Macbook Pro

I'm just starting some new contract work that requires a bit more on-site presence than usual, and instead of syncing up my various dev environments all the time, I decided it was time for a new laptop. My current laptop is a 15" Powerbook G4. It's been a great machine, but for the past couple of years it's mostly been an expensive Email/Browser appliance with the occasional Eclipse/Java diversion.

This time I needed something that lets me do VS.NET development and all the other MS-related things that come across my path. Now, obviously my previous laptop choice shows I'm not impartial, but I certainly wasn't going to settle for less than what I had. I went through this last time as well, and the conclusion is not only the same, but decidedly more in favor of a Macbook this time around.

Go ahead, try it.. Find a laptop like this: slim profile, powerful CPU, large widescreen LCD, light and sturdy. By the time you spec out the PC equivalents, they aren't any cheaper. And for some reason, PCs still come mostly in two flavors: 1) light, but with a small screen or an underpowered CPU, or 2) giant hulking powerhouses that are not much more convenient than the original PC Portables. Add to that that I have yet to find another laptop with a shell as sturdy as the Macs' -- if you look at some of the dents my old Powerbook has sustained, you'd realize I would have cracked open any plastic-shelled laptop a long time ago.

Ok, enough with the rationalization already, buy your Mac, be a fanboy. Don't come whining when it can't cut it as your dev machine...

I have to admit that the last 3 days have included more reboots than I'd care to count, but I finally have everything configured just right, and I don't mind saying that this rig is freakin' sweet!

The setup is as follows: 15" 2.4GHz Core 2 Duo Macbook Pro w/ 2GB RAM & 160GB HD. 140GB Leopard partition. 20GB Boot Camp partition (a short-sighted mistake on sizing) with Win XP Pro, Visual Studio Orcas, etc. VMware Fusion 1.1 Beta running the Boot Camp partition as a virtual machine under Leopard.

So what pitfalls have I come across?

Boot Camp was a pain to get going

VMware Fusion was just about the most painless Windows install I've ever done. It asked me for the key before it started and took care of everything until it booted into XP. Bravo!

Boot Camp, on the other hand, complained about my disk being corrupt (brand new Mac, mind you). Rebooted from CD, ran disk repair, was told there was nothing to repair. Tried Boot Camp again. Success! Started the XP install on the partition Boot Camp had formatted. Got to the reboot early in the install; the Mac rebooted, complained it couldn't boot from CD and that the HD had errors, please press any key -- but no keys produced results. Hard rebooted, ejected the CD, tried to restart the install via Boot Camp, but hit the same problem. Removed the Boot Camp partition and started over, this time manually formatting the disk during the install to be sure. Some more issues aside, Boot Camp finally installed. Only then did I find out that VMware Fusion can use the Boot Camp partition as a virtual image. Now that's useful. Except I had sized it as the emergency fall-back Windows install. Doh.. Well, I'll just mount the Mac disk on the Windows side for storage. Then all my files are inside FileVault as well.

XP activation with Boot Camp/VMware Fusion

After finding out I could use a single install both natively and as a virtual instance, I was thrilled and started up the Boot Camp partition. VMware had to tweak it some, but it came up. XP activation got invalidated because I had apparently changed the hardware too much. The same happened when I rebooted into XP directly. Now it seems OK, but Redmond has received about 4 activations from the same XP install in 2 days. No, I'm not frantically installing on all my friends' PCs, I'm just trying to get one install stable.

VMware Fusion Unity and Leopard Spaces have some issues

Now, I have no right to expect something as funky as Unity to work with an OS feature that was released 4 days ago, so I'm not really bitching, just warning. For me, using Unity and Spaces together caused Spaces to switch back and forth automatically, about twice a second, whenever I had two Windows windows in different Spaces. And my Mac apps all lost their windows. So for now, XP runs fullscreen in its own Space and it's all wonderfully stable.

WPF likes hardware acceleration

So in the middle of the night I wake up in a panic. Everything was working so wonderfully, but had I missed something? Well, I kept saying "I don't care that virtualization doesn't support the GPU, I'm not planning to play games on this machine". Hahaha.. But what about WPF? It uses the GPU to render all that fancy vector goodness! Did I just buy another email appliance? I fired up some WPF samples and they worked fine. As they got fancier, things got a bit choppier, and the 3D was a slideshow. But work, it did. So WPF gracefully falls back to software-only rendering. I rebooted into native XP and WPF was running in all its glory. Yay, all is good.

Ok, this is day 3 with my new rig and I'm very happy on both the Mac and Windows side. I even have a single dev machine that can test all browsers currently supported by Silverlight. And once Moonlight drops, I'll just fire up a VM of Fedora and cover that use case as well. Let's see if the euphoria lasts.

DynamicProxy rocks!

Recently I've had a need to create proxy objects in two unrelated projects. What they had in common was that they both dealt with .NET Remoting. In one it was traditional Remoting between machines; in the other, working with multiple AppDomains for assembly flexibility. In both scenarios I needed additional proxies beyond the Remoting-created transparent proxy, and the Castle project's DynamicProxy proved invaluable and versatile. It's a bit sparse on documentation, but for the most part you should be able to figure things out by playing with it and lurking around their forum.

If you're not familiar with the Castle project, check it out, because DynamicProxy is really only a supporting player in an excellent collection of useful tools. From their Inversion of Control containers (MicroKernel/Windsor) to MonoRail and ActiveRecord (built on NHibernate), there is a lot there that can make your life easier.

But why would I need to create proxies, when the Remoting infrastructure in .NET takes care of this for you with Transparent Proxies? Well, let me describe both scenarios:

Resilient Client/Server operations with Remoting

From my experience with .NET Remoting, it's dead simple to do something simple. But it really is best suited for stateless WebService-style calls, because there isn't a whole lot exposed for adding quality-of-service plumbing. The transparent proxy truly is transparent until something fails. The same is true on the server side, where you don't get a lot of insight into the clients connected to you.

And then there are events, which, imho, are one of the greatest things about doing client/server programming with Remoting. They are painful in two ways: you need a MarshalByRefObject helper to proxy the event calls, and you have to prune dead clients on the server, which you won't find out about until you try to invoke their event handlers.
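To make the first pain point concrete, here is a minimal sketch of such a MarshalByRefObject event helper. The names (EventRelay, its members) are mine for illustration, not from any actual API:

```csharp
using System;

// The relay lives on the client but derives from MarshalByRefObject,
// so the server can invoke Handler remotely instead of trying to
// serialize the client's delegate target; the relay then re-raises
// the event locally.
public class EventRelay : MarshalByRefObject
{
    public event EventHandler Relayed;

    public void Handler(object sender, EventArgs e)
    {
        EventHandler h = Relayed;
        if (h != null)
        {
            h(sender, e);
        }
    }

    // keep the remoting lease from expiring for long-lived subscriptions
    public override object InitializeLifetimeService()
    {
        return null;
    }
}
```

The client subscribes `relay.Handler` to the remote event and listens to `Relayed` locally; the server only ever holds a reference to the remotable relay.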

But those shortcomings are not enough reason to fall back to sockets and custom/stateful wire protocols. Instead, I like to wrap my transparent proxy in another proxy that has all the plumbing for maintaining the connection, pinging the server, and intelligently handling failure. Originally I created these by hand, but I just converted my codebase over to DynamicProxy the other day.

Using CreateInterfaceProxyWithoutTarget and having the proxy derive from my plumbing baseclass automagically provides me with an object that looks like my target interface but wraps the Remoting proxy with my quality-of-service code.

public static RemotingClient<T> GetClient<T>(Uri uri)
{
  Type t = typeof(T);
  ProxyGenerator g = new ProxyGenerator();
  ProxyGenerationOptions options = new ProxyGenerationOptions();
  // derive the generated proxy from my plumbing baseclass
  options.BaseTypeForInterfaceProxy = typeof(RemotingClient<T>);
  RemotingClient<T> proxy =
    (RemotingClient<T>)g.CreateInterfaceProxyWithoutTarget(
      t, new Type[0], options, new ProxyInterceptor<T>());
  proxy.Uri = uri;
  return proxy;
}
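For context, the ProxyInterceptor<T> in the snippet above is my own class. Stripped of the actual quality-of-service logic, its skeleton looks roughly like this -- note that the simplified RemotingClient<T> stand-in and its Target property are assumptions for the sketch, not the real plumbing class:

```csharp
using System;
using Castle.Core.Interceptor;  // DynamicProxy2's interceptor interface

// Simplified stand-in for my plumbing baseclass; the real one manages
// the connection, pinging, and the underlying transparent proxy.
public class RemotingClient<T>
{
    public Uri Uri { get; set; }
    public T Target { get; protected set; }  // the real Remoting proxy (assumption)
}

// Every call on the generated proxy lands in Intercept, where the real
// implementation adds retry/reconnect handling before forwarding the
// call to the underlying transparent Remoting proxy.
public class ProxyInterceptor<T> : IInterceptor
{
    public void Intercept(IInvocation invocation)
    {
        // the proxy was generated with RemotingClient<T> as its baseclass,
        // so it carries the connection plumbing along with it
        RemotingClient<T> client = (RemotingClient<T>)invocation.Proxy;

        // (real code: check connection, retry on failure, etc.)
        invocation.ReturnValue = invocation.Method.Invoke(
            client.Target, invocation.Arguments);
    }
}
```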

I'll do another post later just on Remoting, since the whole business of getting events to work was a bit of a labor and isn't documented that well.

Loading and unloading Assemblies on the fly

The second project started with wanting to load and unload plug-ins dynamically and has since evolved into a framework for auto-updating components of an app at runtime.

This involves loading the assemblies that provide the dynamic components into new AppDomains, so that we can unload those assemblies again by unloading the AppDomain. Instances of the component classes are then marshaled across the AppDomain boundary by reference, so that the main AppDomain never loads the dynamic assembly. This way, when a component needs to be updated, I can dump its AppDomain and recreate it.

The resilience of the connection isn't in question here, although quite a bit of plumbing is needed to get all references to the remoted components disposed before the AppDomain can be unloaded. Again, a DynamicProxy can help on the main AppDomain's side by acting as a facade to the actual reference, so that you can reload the underlying assembly and recreate the reference without the main application having to be aware of it.

But I haven't gotten that far, nor have I decided whether that would be the best way, rather than an explicit disposal and recreation of the references.

Where DynamicProxy comes to the rescue here is when the object to be passed across the boundary isn't derived from MarshalByRefObject. This can happen when you're using a component that already derives from another baseclass, so adding MarshalByRefObject isn't an option. DynamicProxy, with its ability to specify a baseclass for the proxy, gives you a sort of virtual multiple inheritance.

This is also an ideal scenario for using Castle.Windsor as the provider of the component. Using IoC, I was able to create a generic common dll that contains all the interfaces as well as a utility class for managing the components that need to be passed across the boundary. This class never has to know anything about the dynamic assembly, other than that the assembly stuffs its components into the AppDomain's Windsor container.

The resulting logic for creating an instance that can be marshaled across AppDomain boundaries looks something like this:

public T GetInstance<T>()
{
  Type t = typeof(T);
  if (!t.IsInterface)
  {
    throw new ArgumentException("Type must be an interface");
  }
  try
  {
    T instance = container.Resolve<T>();
    if (instance is MarshalByRefObject)
    {
      // already remotable, no proxy needed
      return instance;
    }
    // wrap the instance in a proxy that derives from MarshalByRefObject
    ProxyGenerator generator = new ProxyGenerator();
    ProxyGenerationOptions generatorOptions = new ProxyGenerationOptions();
    generatorOptions.BaseTypeForInterfaceProxy = typeof(MarshalByRefObject);
    return (T)generator.CreateInterfaceProxyWithTarget(t, instance, generatorOptions);
  }
  catch (Castle.MicroKernel.ComponentNotFoundException)
  {
    return default(T);
  }
}

All in all, DynamicProxy is something that really should be part of the .NET framework. The proxy and facade patterns are just too common to support only via the heavier, MarshalByRefObject-dependent remoting infrastructure.

The version of DynamicProxy I used was DynamicProxy2, which is part of the Castle 1.0 RC3 install. It's a lot more versatile than the first DynamicProxy and deals with generics, but mix-ins are not yet implemented in this version. However, if you just need a single mix-in, specifying the base class for your proxy can go a long way toward working around that limitation.

Vista vs. XP

About a month ago, I finally gave in and upgraded my main dev machine. It was getting a little slow for my various build tasks, and running multiple instances of LFS for my test environment just didn't work. That Bioshock had just been released had nothing to do with it, really!

Anyway, I didn't actually upgrade so much as build a new machine and keep my old dev box as another test target. So I figured I might as well go for Vista on this machine. I'd heard plenty of anecdotes, usually sticking to one end of the love/hate spectrum or the other, and I wanted to find out where I'd fit.

Well, it's been a month with Vista, with regular excursions to XP on other machines, and overall I am right at the meh centerline of the spectrum. Vista gives me no trouble, any more than any new OS install does until I tweak it to my liking. There are various UI aspects that I like. But if I were forced to use only XP, I'd just shrug my shoulders. I really can't find anything about the OS that I care about deeply enough to prefer or dislike it more than XP.

So, no, it's not the worst product MS has ever put out and clearly a sign that they're finally losing their stranglehold on the desktop, as you will hear from one extreme; nor is it a next-generation OS solving all the problems we've been having with the previous generation. It's just another version of Windows, if you ask me. From my experience, I have to discount tales of massive downgrade rushes as FUD that's becoming somewhat self-fulfilling. Sure, you ought to have good hardware for Vista. But what software put out today doesn't work better on today's hardware than on the machine you bought when XP was fresh?

Even for my latest Red Hat Fedora install I finally had to admit that my trusty home server from 5 years ago was a bit too weak to handle it. Sure, proportionally Fedora needs much less hardware, but then I don't put it through heavy UI lifting, which is, imho, where most horsepower goes these days, for better or for worse.

Automatic properties syntax wish

Just a quick thought on automatic properties in C# 3.0 (.NET 3.5, Orcas, what have you). I, like most .NET developers, have spent too much time writing

private string foo;

public string Foo
{
 get { return foo; }
 set { foo = value; }
}

This can now be replaced with

public string Foo { get; set; }

Now that's nice and all, but for the most part it doesn't seem like a big step up from just making your field public. You do get the benefit of being able to use the property where reflection is used to discover properties, i.e. in most data-binding scenarios, which don't accept public fields. And it lets you implement interfaces, since they can't define public fields either. But what's a lot more useful, at least to me, is replacing

private string foo;

public string Foo { get { return foo; } }

with

public string Foo { get; private set; }

Now you can create those read-only properties with very simple, clean syntax. That'll clean up a lot of code for me.

But finally, we still have the scenario where properties have side-effects, like calling the event trigger for INotifyPropertyChanged. This cannot be expressed with automatic properties, and we're back to defining a separate private field. What I would love to see is some syntactic sugar along the lines of

public string Foo
 : private _foo
{
 get;
 set
 {
   string previous = _foo;
   _foo = value;
   if(previous != _foo)
   {
     OnPropertyChanged("Foo");
   }
 }
}

I know this doesn't buy that much terseness (the get is shorter and you don't have to declare the type of _foo), but it means that the private storage used by the property is part of the property definition, which makes the definition that much easier to read, imho. That, or allow pre- and post-conditions for set in automatic properties, although no syntax I can think of for that seems cleaner than the above.
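For contrast, here's the full pattern you have to write today for that side-effecting property, backing field and all (with the OnPropertyChanged trigger spelled out for completeness):

```csharp
using System.ComponentModel;

public class Model : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    private string _foo;

    public string Foo
    {
        get { return _foo; }
        set
        {
            string previous = _foo;
            _foo = value;
            // only notify on an actual change
            if (previous != _foo)
            {
                OnPropertyChanged("Foo");
            }
        }
    }

    protected void OnPropertyChanged(string name)
    {
        PropertyChangedEventHandler h = PropertyChanged;
        if (h != null)
        {
            h(this, new PropertyChangedEventArgs(name));
        }
    }
}
```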

TDD & you can't test what you can't measure

Recently I've been dealing with a lot of bug fixing that I can't find a good way to wrap tests around. Which is really annoying, because it means that as things get refactored, these bugs can come back. These scenarios are almost always UI related, whether it's making sure that widgets behave as expected or monitoring the visual state of an external application I'm controlling (Live For Speed, in this case). What all these problems have in common is that the existence of a bug can only be divined by an operator, because somewhere there is a lack of instrumentation that could programmatically tell me the state I'm looking for. And to paraphrase the old management saying, "You can't test what you can't measure".

My best solution is the usual decoupling of business logic from UI. Make everything a user can do an explicit function of your model, and now you can cover those functions with unit tests. At least your business logic is solid, even if your presentation gets left out in the cold. And depending on your UI layer, a lot of instrumentation can still be strapped on. WinForms controls are generally so rich that you can probably query them for nearly everything you'd want to test. But things you can verify by eye within a second may take a lot of lines of code to test programmatically, and those tests go right out the window when you refactor your UI. And if you're trying to test the proper visual states of a DataGridView and its associated bindings, you're in for some serious automation pains.
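As a trivial, made-up illustration of that decoupling: the button's click handler does nothing but call the model, so the actual rule lives where a unit test can reach it. The names here (PlayerModel, IncreaseVolume) are hypothetical.

```csharp
using System;

// The "increase volume" button calls this method and nothing else,
// so the rule -- volume is clamped to 100 -- is testable without
// instantiating a single widget.
public class PlayerModel
{
    public int Volume { get; private set; }

    public void IncreaseVolume(int amount)
    {
        Volume = Math.Min(100, Volume + amount);
    }
}
```

A test can now assert the clamping rule directly: increasing by 150 from zero must leave Volume at 100, no form required.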

I know that business logic is the heart of the application, but the presentation is what people interact with, and if it's poor, it doesn't matter how kick-ass your back end is. So for the time being, the UI takes a disproportionate amount of testing time to run through its possible states, and that's something I would like to handle more efficiently.

LFSLib 0.16b w/ InSim Relay support released

With this version, LFSLib.NET gains support for the InSim Relay currently in testing (see this forum thread for details). The InSim Relay allows you to get a listing of servers connected to the relay and send and receive InSim packets to and from them without having to connect to them directly. To create an InSimHandler to the relay, simply call:

IInSimRelayHandler relay = InSimHandler.GetMasterRelayHandler();

The production relay server isn't up yet, so for testing Victor has set up the following server:

IInSimRelayHandler relay = InSimHandler.GetRelayHandler("vic.lfs.net", 47474);

IInSimRelayHandler implements a subset of the main InSimHandler, discarding methods, properties and events that don't make sense and adding a couple of relay specific ones.

The remainder of the changes are minor tweaks and bug fixes:

  • Added auto-reconnect feature to handler (really only works for TCP, since UDP is connectionless)
  • CameraPositionInfo now allows ShiftU & related properties to be set
  • Updated TrackingInterval to accept values as low as 50ms
  • BUGFIX: Fixed race condition where a LFSState event could be fired on a separate thread before the main thread noticed that LFS was connected, allowing for invalid NotConnected exceptions to occur.
  • BUGFIX: Fixed Ping event (was looking for the wrong packet)
  • BUGFIX: RaceTrackPlayer.HandicapIntakeRestriction always returned zero

Full details are in the release notes.

All links, docs, etc are at lfs.fullmotionracing.com

Observable event subscription

The other day I was trying to create an explicit interface implementation of an event and ran into a snag. I've done many explicit interface implementations before but, I guess, never of an event, because I'd never seen the below compiler error before:

An explicit interface implementation of an event must use property syntax

Huh? I did some searches on that error and the best reference was this page. I had long forgotten that the whole += and -= business is closely related to how properties work. But instead of using get and set, you can create your own logic for subscribing and unsubscribing using add and remove.

As Oliver points out on his blog, it does mean that your event becomes just a facade, and you still need another event that actually gets subscribed to. So for that explicit interface implementation you'd get something like this:

interface IFoo
{
   event EventHandler FooHappened;
}

class SomeClass : IFoo 
{
  private event EventHandler internal_FooHappened;

  event EventHandler IFoo.FooHappened
  {
    add { internal_FooHappened += value; }
    remove { internal_FooHappened -= value; }
  }

  private void OnFooHappened()
  {
    if( internal_FooHappened != null )
    {
      internal_FooHappened(this,EventArgs.Empty);
    }
  }
}

Now, this is really an edge case, imho. But the same syntactic trick allows for something else that's pretty cool, i.e. being notified when someone subscribes to your event. It has happened a number of times that it would have been useful to take some action on subscription. Maybe the event needs some extra resource that you don't want to initialize unless someone needs it. It may even be that the real event source is that resource. So you can use the above syntax and put extra logic inside the add to initialize that resource, like this:

class SomeClass
{
  SomeResource myResource;
  int subscriberCount;

  public event EventHandler FooHappened
  {
    add
    {
      if( myResource == null )
      {
         myResource = new SomeResource();
      }
      myResource.FooHappened += value;
      subscriberCount++;
    }
    remove
    {
      myResource.FooHappened -= value;
      if( --subscriberCount == 0 )
      {
         myResource.Dispose();
         myResource = null;
      }
    }
  }
}

(Note that you can only null-check a field-like event from inside its declaring class, so the facade keeps its own subscriber count to know when the last handler is gone.)

LFSLib.NET 0.15b released

Although 0.15b was supposed to be the version that brings LFS proxy support to LFSLib.NET, a crash bug with AXI & AXO packets forced me to roll back that incomplete code for now and release a bugfix version.

The bugs fixed in this version are:

  • AXI & AXO packets were not properly received and killed the TCP handler
  • Eliminated race condition in static population of encodings
  • Fixed bug with text-less button
  • Socket clean-up in UDP handling could throw unexpected exceptions
  • OutSim & OutGauge used bad buffer sizes if ID was 0 and would throw an OutOfRangeException
  • Close() could throw NotConnected exception, which was useless

The remainder of the changes are clean-up and non-breaking, forcing some more parts of the API into obsolescence.

Full details are in the release notes.

All links, docs, etc are at lfs.fullmotionracing.com

Playing with Xaml and LFS

I thought it was about time to see how hard it would be to get Live For Speed track maps into Xaml for use on the desktop or in Silverlight. I initially started with LFS's SMX (simple mesh format) files, but taking triangle data and turning it into a manageable number of polygons for vector representation is proving to be an interesting exercise. I might get back to it later, but for now I've found a more productive approach.

LFS provides a file format called PTH which basically contains the drive line in world coordinates as a set of nodes for each track configuration. But here's the cool part: every node also includes its orientation and the left and right distances from the node to the track edges and drivable-area edges. The left and right edge points of two consecutive nodes describe a quadrilateral. I decided to render these out as a series of polygons in Xaml, and the result was a very usable representation of the track.
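In code, extracting the two edge points from a node looks roughly like this. The field layout is my simplified paraphrase of the PTH data, not the actual binary format, and the sign conventions are arbitrary:

```csharp
using System;

// Assumed, simplified PTH node: a world position, a unit direction
// vector along the drive line, and the distances from the node to the
// left and right track edges.
public struct PathNode
{
    public double X, Y;
    public double DirX, DirY;          // unit vector along the drive line
    public double LeftLimit, RightLimit;

    // the edge points sit perpendicular to the drive direction;
    // the perpendicular of (DirX, DirY) is (-DirY, DirX)
    public void Edges(out double lx, out double ly, out double rx, out double ry)
    {
        lx = X - DirY * LeftLimit;  ly = Y + DirX * LeftLimit;
        rx = X + DirY * RightLimit; ry = Y - DirX * RightLimit;
    }
}
```

The polygon for two consecutive nodes a and b is then just the quad (a.left, b.left, b.right, a.right).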

But this still generates a lot of polygons for such a relatively simple shape. I initially went down the path of just skipping over nodes and reducing detail that way. This approach, however, degraded the detail quickly. My next approach was to take the edges of multiple node polygons and simply draw a single polygon encompassing them all. The file size didn't drop as significantly, but the detail didn't drop at all. Below is the comparison of the two polygon-reduction techniques:

  • 2 nodes/polygon: At this stage the two outputs are identical, and the Xaml file sizes are identical at about 90KB.
  • 4 nodes/polygon: You can see a little detail reduction in the skipped-node version, but the file size reductions are drastic: 31KB for the skipped version, 40KB for edge-following.
  • 8 nodes/polygon: The detail loss in the skipped version is becoming noticeable. Since this is a vector format it lends itself to panning and zooming, but at this point the loss of detail would become annoying when zoomed in. However, the file size reductions continue to outpace edge-following, at 14KB and 27KB respectively.
  • 16 nodes/polygon: The skipped-node output is basically unusable as a representation, while edge-following still looks as crisp as the original. The skipped file size has again halved, but really is not useful. Edge-following is at 21KB, clearly slowing down in file size reduction.

I continued with the edge-following polygon reduction up to 128 nodes per polygon. However, by 64 the file wasn't getting any smaller (the verbosity of Xaml had become negligible compared to the number of polygon points). The best final file size was about 17KB, which is quite acceptable for a Silverlight application, if you ask me.

Next I need to hook this Xaml up to my InSim code and then I'll be able to render race progression. This'll have to live inside a WPF app for now, until Silverlight's CLR gets sockets and I can port LFSLib.NET to it. Right now I'd have to do constant server callbacks, which wouldn't scale or perform well.

Lfs State Inspector

I've been spending a lot of time just firing up LFS, hopping through screens, and catching the state events. But there's a lot of state and no good way to see what's changed (most of it is obvious, but some isn't). So I threw together a quick state inspector app that reports the current state and uses color to highlight which values have changed. In addition, the shade of the color indicates how many state changes ago the change happened, since certain actions cause multiple state change events to be sent and it's easy to miss a change.

I figure it's also a nice simple example of LFSLib.NET, so I'm releasing it both as binary and source for people to play with.