I've just put v0.12b on the server. This version does bring LFS 0.5X compatibility, but it does not expose the new functionality. The goal of this version is primarily to let 0.11b code work against LFS 0.5X with the least amount of changes (some data just isn't represented the same way anymore, which caused breaking changes). This version is also lightly tested, as I'm not going to go through an exhaustive set of tests until I get full 0.5X feature exposure (yeah, I know, I should be doing this test driven, but that means a mock LFS or an LFS test harness and I haven't had the patience for that).
So what has changed? Here are the notes:
Moved to .NET 2.0 and Visual Studio 2k5
Swapped out the back-end for dealing with InSim packets so it more closely resembles the native LFS formats, and removed the intermediate Msg* classes (all internal changes)
Finally, mostly functional Unicode support for incoming and outgoing LFS messages
Updated existing functionality to work against patch X, with some breaking changes in what is available in Events and how that data is represented.
Full patch X protocol implemented, just not yet exposed, so new packets don't come across as unknown packets
An extra note on "mostly functional Unicode support": for conversion of Unicode to the LFS encoding, I am using a lookup table from Unicode to codepage+byte, and there are a couple of LFS-specific characters that screw up under the Japanese encoding. Split messages also have some issues. This will be fixed in 0.13b.
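To illustrate the lookup-table idea, here is a minimal sketch. This is not the library's actual code: the BuildLookup() helper, the message variable and the caret-prefixed codepage markers are my assumptions about what such a conversion could look like.
// sketch only: map each Unicode character to (codepage marker, byte value)
// assumes: using System.Collections.Generic; and a hypothetical BuildLookup()
Dictionary<char, KeyValuePair<char, byte>> lookup = BuildLookup();
List<byte> output = new List<byte>();
char currentCodepage = 'L'; // assume a Latin default page
foreach (char c in message)
{
    KeyValuePair<char, byte> entry;
    if (!lookup.TryGetValue(c, out entry))
        continue; // unmapped character; a real implementation needs a fallback
    if (entry.Key != currentCodepage)
    {
        // emit a codepage switch marker before changing pages
        output.Add((byte)'^');
        output.Add((byte)entry.Key);
        currentCodepage = entry.Key;
    }
    output.Add(entry.Value);
}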
The new version can be found here and the home of the lib is here
Let's say you have n objects implementing an interface. Call these messages, each implementing IMessage. Each message may cause a different action when given to another object, the receiver. Sounds like the perfect scenario for polymorphism. Create a Handle() method for each of the n messages, specifying the concrete type in the signature. Now, let's say you receive that message from a MessageFactory and, obviously, that factory hands you the message via the interface. So here's the pseudo code:
public interface IMessage
{
}
public class MessageA : IMessage
{
}
public class MessageB : IMessage
{
}
public class Receiver
{
public void Handle(MessageA msg)
{
}
public void Handle(MessageB msg)
{
}
}
public class MessageFactory
{
public static IMessage GetMessage()
{
    // implementation elided in the original; returning a
    // concrete type here just so the skeleton compiles
    return new MessageA();
}
}
Now, how do you pass this message off to the appropriate Handle() version? You can't just call Handle() with your message, because your message is actually an IMessage, not the concrete type. I can think of three workarounds and don't like any of them. Is there another way, or is this just a limitation of C#? In the meantime, here are the three ways I can think of solving this:
The traditional way: use if/else or switch to examine the incoming object and dispatch it. Just create a method Handle(IMessage msg) and have it dispatch each message to the proper Handle(). Traditional, simple. But as n increases it becomes less and less readable, and dispatch is now a runtime decision that can simply fail if you overlooked a concrete class.
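In sketch form, the dispatcher could look like this (the fall-through exception is my addition, not part of the original design):
public void Handle(IMessage msg)
{
    if (msg is MessageA)
        Handle((MessageA)msg);
    else if (msg is MessageB)
        Handle((MessageB)msg);
    else
        throw new ArgumentException("No Handle() for " + msg.GetType());
}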
At runtime, examine the actual type of the message and then inspect your receiver to dynamically invoke the appropriate method. No worries about maintaining the code or readability. It's just magic... but runtime magic which may yet fail if you overlooked a concrete class.
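A sketch of what the reflective version might look like (a Dispatch() method on the receiver is my illustration; error handling kept minimal):
// requires: using System.Reflection;
public void Dispatch(IMessage msg)
{
    // look for a public Handle() overload taking the concrete message type
    MethodInfo handler = GetType().GetMethod("Handle", new Type[] { msg.GetType() });
    if (handler == null)
        throw new MissingMethodException("No Handle() overload for " + msg.GetType());
    handler.Invoke(this, new object[] { msg });
}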
There is probably a named pattern that goes with this approach. It shares some aspects with the Visitor pattern (that's what inspired me actually), but I don't know what this pattern is actually called. The idea is that each message doesn't get passed to the receiver but passes itself. I.e. it calls the appropriate method. For this to work, we add a method to our interface:
public interface IMessage
{
void Dispatch(Receiver receiver);
}
public class MessageA : IMessage
{
public void Dispatch(Receiver receiver)
{
receiver.Handle(this);
}
}
This works perfectly. After getting the message from the factory, you just call msg.Dispatch(receiver). Since each message implements its own dispatch for itself, we get compile time checking. Yay. But we basically copy and paste the Dispatch() method into every message class, and we can't factor it into a common base class, because inside a shared base class the compile-time type of this is the base class, so overload resolution would no longer pick the per-message Handle(). So it achieves the ends I'm after, but I don't like the implementation.
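Putting it together, usage looks like this (a quick sketch):
Receiver receiver = new Receiver();
IMessage msg = MessageFactory.GetMessage();
msg.Dispatch(receiver); // each message picks its Handle() overload at compile time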
I came across an artifact with generics that confused me. Let's say you had an interface IFoo:
public interface IFoo
{
[...]
}
and a class Foo that implements IFoo:
public class Foo : IFoo
{
[...]
}
then a given method returning IFoo doesn't have to explicitly cast Foo to the interface:
public IFoo Foo()
{
return new Foo();
}
So, I figured that the same would be true for a generic method returning T with a constraint to IFoo. However, it seems that the constraint is not considered for implicit casting to T. So you have to explicitly cast Foo to IFoo and then cast that to T:
public T Foo<T>() where T : IFoo
{
return (T)(IFoo)new Foo();
}
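For reference, this is the version I expected to work; the compiler rejects the direct cast (the comments are mine):
public T Foo<T>() where T : IFoo
{
    // does not compile: C# won't convert Foo directly to T,
    // even though T is constrained to IFoo
    return (T)new Foo();
}
Casting through the interface (or, equivalently, through object) gives the compiler a conversion path it accepts.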
Ok, I've now twice wasted time on trying to figure this out and this time I'm writing it down, so I don't do it a third time :)
I recently added an Eclipse project to SVN (it's already in Perforce, my usual source control of choice) and after that my JAR export would fail with a bunch of "Resource is out of sync" errors. Since all the resources were .svn related, I blamed SVN and chased this down the wrong path for a bit.
Here's the simple solution: If you see a jar export fail because of "resource is out of sync", try doing a Refresh on the project before anything else. That should solve it.
I was just trying to move a remoting service from .NET 2.0 to mono and ran into interop problems. The client & server would work fine when running on the same platform, but fail when crossing platforms. Turns out that the secure portion of TcpChannels isn't completed yet on mono, throwing an AuthenticationException when a Windows client tried to connect to a Linux server. The culprit is ISecurableChannel. To avoid this, simply don't use the security feature for now (i.e. use at your own risk, best on a private net).
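For example, a server-side channel registration without the security feature might look like this (the port number is illustrative; the null sink providers just mean the defaults):
// requires: using System.Collections;
//           using System.Runtime.Remoting.Channels;
//           using System.Runtime.Remoting.Channels.Tcp;
IDictionary props = new Hashtable();
props["port"] = 8080;    // illustrative port
props["secure"] = false; // skip the unfinished security layer on mono
TcpChannel channel = new TcpChannel(props, null, null);
ChannelServices.RegisterChannel(channel, false); // ensureSecurity = false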
While I was at MEDC, Mix07 was going on next door. I didn't even hear the cool Silverlight announcements until I got back from Vegas. But now that I've played with it, it's exactly what I had hoped for (minus a disconnected use model): we have a true VM to code against on the client side. That means the full breadth of .NET languages to control the reduced WPF presentation layer, plus, with the DLR, true scripting support at near-compiled performance.
As I expected, there was no linux support announced by MS, and neither does it appear that they've opened a channel to the mono team. But Miguel de Icaza has addressed this issue independently with the Moonlight project, and judging by the chatter on the dev list and the wiki updates, that project is getting serious attention. It may even be better this way. I think a firefox plug-in coming from the mono team would be more willingly installed by linux users than something coming directly from MS.
We'll have to see how much more of the WPF set of Xaml gets supported by Silverlight as 1.1 moves to beta and RTM. Obviously, the way custom controls are done is not ideal right now: the way properties act against implementationRoot (I haven't gotten deep enough to understand why it's not using the DependencyObject/Property pattern) and the shadowing of inherited properties just make for a strange experience compared to controls in WPF, Windows Forms or even ASP.NET. Hopefully that morphs into a better model as we move along. If nothing else, the development of the promised suite of UI controls provides internal motivation to refine the model. Not that I think that DependencyObject/Property is an elegant coding pattern (at least not until that legwork moves into code generation).
But while everyone is busy committing the same UI crimes that many Flash developers have been guilty of for years, I think the real significance of Silverlight is not its Xaml and rich media support. In my opinion, the most amazing feat of Silverlight is its DOM interoperation. The Silverlight VM is accessible in both directions, complete with object serialization across the boundary. This means you can have managed code attached to DOM events, and managed code modifying the DOM itself or calling javascript in the page. With a minimum of glue, you could move all your code inside of Silverlight, and even if you don't care about C# et al., you'd at least get a much higher performance javascript out of the deal. Then add the rich communications infrastructure and the hinted-at socket-level networking in future versions, and suddenly you can have very fast, very interactive web applications that could be truly stateful and have better event handling for asynchronous operation.
Right now, you need a 1x1 pixel Silverlight object that everything gets channeled through, but MS has already promised that that dependency is going away. So what you really have is a VM and rich framework in the browser, addressable with bytecode compiled from a huge number of potential languages. Whether you use that VM to drive your HTML, Silverlight's presentation layer or even the canvas tag is up to you. It's what I had hoped the browser manufacturers would do themselves, but as a plug-in, the chance of this capability becoming ubiquitous is even better, imho.
Silverlight 1.1 alpha is clearly a very early peek into the tech, but it works just great already. I'm curious what it will look like by the time it's an official release. And I certainly hope that by that time, MS has given the disconnected, self-hosted option some consideration. After all, when competing with Flash/Flex, one shouldn't ignore Apollo.
I was trying to reconfigure mail once again (a rant for another time) and trying out a couple of VMware Virtual Appliances. But every appliance I tried on my server would just appear to hang: black screen, no bios, no nothing. I finally found a post on the VMware forums talking about permissions, so I compared my working images to the non-working ones, and the problem was simply that the .vmx file needed to have execute permissions.
Disclaimer: I've been trying to store some text resources in my jar so that I can easily package uncompiled content with my executable. This course of action seemed like the natural thing to do. However, what I had to do to actually access this file and allow for different environments, like running from eclipse vs. executing the jar, makes me wonder if I'm missing some essential facilities. I.e. can eclipse build the jar and execute it instead of running its class files, so that you can test jar-stored execution as part of eclipse's normal debug and execution? And is there some facility that lets you just say InputStream res = App.GetResourceStream()? Well, in the meantime here's what I cobbled together to get this working, since my googling didn't come up with a single source that put it all together.
First thing, we need to determine where we are executing, both because it's the base for finding our resource and because it tells us whether we're dealing with a directory hierarchy or a jar file:
// requires java.security.ProtectionDomain, java.security.CodeSource,
// java.net.URL and java.io.File
ProtectionDomain protectionDomain = this.getClass().getProtectionDomain();
CodeSource codeSource = protectionDomain.getCodeSource();
URL location = codeSource.getLocation();
File f = new File(location.getFile());
Now we get an InputStream, depending on what environment we're in:
// requires java.io.InputStream, java.io.FileInputStream,
// java.util.jar.JarFile and java.util.jar.JarEntry
String resourcePath = "resources/xsl/register.xsl";
InputStream resourceStream = null;
if (f.isFile()) {
// it's a file, so we assume it's a jar and look for our
// resource as a JarEntry
JarFile jar = new JarFile(f);
JarEntry xslEntry = jar.getJarEntry(resourcePath);
resourceStream = jar.getInputStream(xslEntry);
} else {
// it's a directory, so we just append our relative resource
// path
resourceStream = new FileInputStream(f.getAbsolutePath() + "/"
+ resourcePath);
}
Now we have an InputStream and can ingest the file any way we like.
This may be ancient news, but I just came across an article that strongly implied that the reason .NET came about was that Sun didn't like Microsoft's addition of delegates to J++. That is surely a condensation of events, but it's certainly an interesting yarn.
I got into this whole subject because I needed an object to subscribe to the state change of another object in my java project. And I didn't want to create a tight coupling between otherwise unrelated objects. In C#, I would have created an event for the state change and have the second object subscribe to it. Done.
Alas, I'm in java and don't have delegates, i.e. no events in the way I'm used to. I remembered creating lots of anonymous inner classes during some long-ago experiences with swing programming, which I always found to be less than transparent in presentation. So I figured I'd do some digging to see how delegation and events should be handled in java. As someone used to C# properties, I don't like seeing getFoo()/setFoo(), and I don't want to subject java developers to my C#-ing of java code in return.
I started googling java, delegates and inner classes and came across a plethora of interesting articles, including the obligatory "C# is better than java because it has delegates" and "java kicks C#'s ass because it isn't littered with atrocious syntactic sugar like delegates when inner classes will do" varieties. Of these, the most entertaining was a rather testy condemnation of delegates in J++ in the form of a white paper on Sun's site. Considering the stance Sun has taken over the years on java, i.e. binding the language, runtime and philosophy into a single indivisible unit, having MS try to subvert the language with what to many looks like a procedural programming throwback certainly could have been a significant motivator for the lawsuit that revoked Microsoft's java license. Looking at C# and the CLR, Microsoft obviously saw a lot of things it liked in the java language and the jvm. So the story that .NET came about because Sun rejected delegates doesn't seem too far-fetched, and it's my favorite version so far.
Now, back to the problem at hand: how does one do delegation in java? The answer does appear to be inner classes functioning as callbacks. This certainly does the trick, but it's a bunch of code and interfaces to create, which in the end doesn't improve the readability of the code. As an illustration, here is the C# code and the java code I created to get the same effect. Note: I didn't need the extra information that C# events provide, i.e. the event source and event arguments, so I left them out of the java version to keep the code more concise. I also didn't do any checking whether there are subscribers, etc. -- read: this is an illustration, not production code :)
// C# Publisher
public class Publisher
{
// create the event, which implicitly gives us add/delete subscribers
public event EventHandler someAction;
public void DoAction()
{
Console.WriteLine("Start action");
// implicitly call all subscribers
someAction(this, EventArgs.Empty);
Console.WriteLine("End action");
}
}
// java Publisher
public class Publisher {
private List<EventHandler> subscribers = new ArrayList<EventHandler>();
public void subscribeToAction(EventHandler notifier)
{
subscribers.add(notifier);
}
public void doAction()
{
System.out.println("Start action");
for( EventHandler subscriber : subscribers ) {
subscriber.handle(this);
}
System.out.println("End action");
}
}
// C# Subscriber
public class Subscriber
{
private string name;
public Subscriber(string name)
{
this.name = name;
}
public void AttachToPublisher(Publisher publisher)
{
// subscribe to the event. This creates a closure for this particular
// instance of Subscriber.
publisher.someAction += new EventHandler(RespondToAction);
}
void RespondToAction(object sender, EventArgs e)
{
Console.WriteLine("Responding to action for '" + name + "'");
}
}
// java Subscriber
public class Subscriber {
private String name;
public Subscriber(String name) {
this.name = name;
}
public void attachToPublisher(Publisher publisher) {
// create a new anonymous instance of the EventHandler
// as a closure for this instance of Subscriber
publisher.subscribeToAction(new EventHandler() {
public void handle(Publisher publisher) {
respondToAction();
}
});
}
private void respondToAction() {
System.out.println("Responding to action for '" + name + "'");
}
}
In C# this is just built-in plumbing. In java we create a simple interface that our anonymous inner class will implement:
public interface EventHandler {
void handle(Publisher publisher);
}
Now we exercise the code:
Publisher p = new Publisher();
Subscriber s1 = new Subscriber("abc");
Subscriber s2 = new Subscriber("xyz");
s1.AttachToPublisher(p);
s2.AttachToPublisher(p);
p.DoAction();
The java exercise code is virtually identical, just with different casing for code style, and both produce this output:
Start action
Responding to action for 'abc'
Responding to action for 'xyz'
End action
Now add lots of different events and the unsubscribing of events, plus more complex EventHandlers, and the amount of code you end up writing quickly becomes significant. If there is one thing object oriented programming encourages us to do, it is to take repetitive code patterns and formulate reusable objects. Plenty of people in the java community have created delegate-like helpers that make delegation easier to read and maintain than plain inner classes, my favorite so far being Alex Winston's strongly typed approach.
So are delegates just syntactic sugar or a throw-back to procedural coding? To those with only a cursory understanding of delegates, they do just look like function pointers from C, or at best type-safe function pointers. But just like inner classes, they create instance-specific closures, and on top of that they throw in multi-casting plus synchronous and asynchronous invocation of the closure. I, at least, think delegates as a first-class citizen of the runtime make life easier, improve readability and do not detract from the object oriented nature of the surrounding code.
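To make the closure and multi-cast points concrete, here is a quick C# 2.0 sketch (illustrative only; assumes using System;):
// two anonymous methods, both capturing the local variable 'name'
string name = "abc";
EventHandler handler = delegate(object s, EventArgs e) {
    Console.WriteLine("first handler for '" + name + "'");
};
// += combines delegates into a multicast invocation list
handler += delegate(object s, EventArgs e) {
    Console.WriteLine("second handler");
};
handler(null, EventArgs.Empty); // invokes both handlers, in order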
I just built a new linux dev machine and decided to go Ubuntu this time around to see what all the excitement is about. I had been a Redhat/Fedora Core desktop user since 1999 and only recently switched to Centos for my servers.
On the hardware side, I've been a big fan of the Asus Vintage line of barebone machines and have a number of them doing simulator duty for Full Motion Racing. They're a great machine for the money. The box I ended up with was the Asus Vintage V2-P5945G. Another clean and simple design.
I had everything assembled in no time and popped in the Ubuntu 6.10 CD. It booted into the Ubuntu Live desktop, i.e. a fully functional distro running straight off the CD without any HD requirements. On the desktop was an "Install" icon which ran a very simple wizard to take care of installation. All around a slick, easy and trouble-free process. Well, almost: I had no network connectivity. It didn't even see the network device, nor did I get a link light.
After some research I discovered that this mobo (along with a bunch of other Asus mobos) uses an Attansic Gigabit Ethernet device that isn't supported by the current Ubuntu release. For that matter, the driver only made it into the mainline kernel in January. So I tried again with Ubuntu "Feisty Fawn" and this time everything went off without a hitch.
We'll see how I like Ubuntu in time, but the install experience is certainly one of the slickest I've seen to date, more newbie friendly than Windows or Mac. Granted, the inability to find my ethernet adapter should have been handled a bit better, if only to give a warning that none was found. Really, if you can't find any ethernet device at all, chances are something is up.