Being an Eclipse newbie, setting up third-party JARs was a bit painful to figure out. Well, not painful as such, just painful once I tried to move the project across platforms.
See, I was using the External JARs... option in the Build Path dialog. But that set up absolute paths, which is a bad idea even if you stay on the same platform. Going across platforms, C:\.. just wasn't an option. So I tried editing the .classpath by hand. That just tied the path to the filesystem root. Finally I figured out that if I added a lib directory to the project, the JARs I put inside it became browsable via the Add JARs... option. Now everything builds happily across Mac and Windows. Joy.
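For reference, the fix boils down to the kind of entry that ends up in .classpath. A sketch of what the portable, project-relative version looks like (the JAR name here is made up):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<classpath>
	<classpathentry kind="src" path="src"/>
	<!-- project-relative path produced by Add JARs..., portable across platforms -->
	<classpathentry kind="lib" path="lib/some-library.jar"/>
	<!-- an External JARs... entry would instead bake an absolute path in here -->
	<classpathentry kind="output" path="bin"/>
</classpath>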
Last time I used Java, JAXB was just being released. At least at the time, it was all about schema definition and code generation. I loved the concept of serializing straight into XML and back, but code generation is always a bit of a nasty beast, imho. Plus, adding custom code to your class was also annoying: you basically had to extend your generated class to add anything custom to it.
When I started working with .NET, I was pleasantly surprised by their Attribute-driven approach. Now the code and the XML definition were in one place and no post-processing was required.
Back in Java land, I thought that with Java 5.0's addition of Annotations, maybe the metadata code-markup method had been picked up as well. Sure enough, there is JAXB 2.0 Reflection. However, if all I want to do is some config files and maybe some data serialization without the full Web Services juggernaut, that seems like a heavy-handed way of doing it.
A little further digging brought me to Simple. One tiny JAR and you are ready to define, serialize and deserialize your classes. Woot. Sure, my .NET bias is showing, but I like this approach so much better than starting from the XSD/DTD and generating code. And creating code that lets you move objects from one side to the other is dead simple:
Java:

```java
@Root(name="example")
public class Example {

    @Element(name="message")
    private String text;

    @Attribute(name="id")
    private int index;

    public Example() {
        super();
    }

    public Example(String text, int index) {
        this.text = text;
        this.index = index;
    }

    public String getMessage() {
        return text;
    }

    public int getId() {
        return index;
    }
}
```
C#:

```csharp
[XmlRoot("example")]
public class Example
{
    private string text;
    private int index;

    public Example()
    {
    }

    public Example(string text, int index)
    {
        this.text = text;
        this.index = index;
    }

    [XmlElement("message")]
    public string Message
    {
        get { return text; }
        set { text = value; }
    }

    [XmlAttribute("id")]
    public int Id
    {
        get { return index; }
        set { index = value; }
    }
}
```
The main difference is that the Simple approach doesn't require a public setter on the data, which is actually kind of strange. How does deserialization set the field? Does it use reflection? Regardless, two pieces of code, looking quite similar, allow you to serialize a class from C# to Java and back without a lot of work.
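My guess is that it does use reflection. As a sketch of how a deserializer can populate a private field without any setter, using only java.lang.reflect (the class and field names here are just for illustration, this isn't Simple's actual implementation):

```java
import java.lang.reflect.Field;

public class FieldInjection {
    static class Example {
        private String text; // private, no setter
        public String getMessage() { return text; }
    }

    public static void main(String[] args) throws Exception {
        Example example = new Example();
        // look up the private field and suppress access checks,
        // the way a reflection-based deserializer could
        Field field = Example.class.getDeclaredField("text");
        field.setAccessible(true);
        field.set(example, "hello");
        System.out.println(example.getMessage()); // prints "hello"
    }
}
```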
I've been doing Java again for a project for the last few weeks and it's been fun. You do get spoiled by the clean syntax C# Properties give you over the getter/setter pattern. And talk about notation differences: I'm so used to PascalCase on everything except fields and parameters that writing camelCase on methods is taking some getting used to again. And don't even get me started on switching back and forth between Eclipse and VS.NET several times a day. My muscle memory is in shock. But once you get your mind in the right frame for the idiosyncrasies, everything else is so similar it's scary, especially with the Java 5.0 additions.
The difference, however, between the two platforms that I was always on the fence about is Checked Exceptions:
In Java, I love that I can declare what exceptions a method may throw, and the IDE can pick this up and let you make intelligent decisions. In C#, you have to hope there are some mentions in the docs; otherwise you just find out at runtime and either have a catch-all or add individual catches as you find exceptions.
But then, checked exceptions are in my mind the party most responsible for the good old catch {} gem. And it's unavoidable. There are a number of cases where a non-type-safe argument is required which, if it were wrong, would throw an exception. However, most of the time that argument isn't dynamic, but some constant you define. You really know that the exception you are forced to handle will never be thrown, so you put a catch {} in there, feeling justified. Soon enough the guilt is gone and you start using it when you just don't feel like dealing with that exception right now. Suddenly you're swallowing exceptions that someone up the chain might have cared about or that should have legitimately bubbled to the top. Bah.
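A Java sketch of the pattern I mean: the constant argument makes the checked URISyntaxException effectively impossible, yet the compiler forces you to handle it anyway (rethrowing unchecked here is at least better than an empty catch):

```java
import java.net.URI;
import java.net.URISyntaxException;

public class ConstantUri {
    static URI homepage() {
        try {
            // the argument is a compile-time constant we know is valid,
            // but the compiler still forces us to handle the exception
            return new URI("http://example.com/");
        } catch (URISyntaxException e) {
            // the tempting move is an empty catch {}; rethrowing
            // unchecked at least keeps the failure from being swallowed
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(homepage().getHost()); // prints "example.com"
    }
}
```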
Being a rabid fan of anything that makes code requirements machine-discoverable, not being able to declare my exceptions feels dirty. And even for human discovery, documentation isn't sufficient, since it only covers the surface level, i.e. what I throw. Now, if I use someone else's code in there, I need to hope they were also diligent, and I have to add any exceptions they may throw and I don't handle to the ones I may throw. Yeah, that documentation is highly likely to fall behind the reality of the code.
Wouldn't it be nice if we could declare our exceptions as informational? Like throws, but you don't have to deal with it. Huh, wouldn't that be metadata about your code, i.e. a perfect candidate for Attributes née Annotations? I've seen rumblings here and there about this, but nobody's ever picked it up seriously, that I could find. I, for one, would love it if I could say
```csharp
[Throws("InvalidOperationException,FooBarException")]
public void SomeMethod() { ... }
```
and have the IDE pick up on it to let me know and optionally generate warnings about things I'm not catching. I think that would be the best of both worlds.
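In Java terms, such an informational throws declaration could be sketched as a custom annotation read via reflection. This is purely illustrative (the annotation name and everything around it are made up, not an existing library):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

public class ThrowsDemo {
    // hypothetical informational equivalent of "throws": pure metadata,
    // nothing forces the caller to catch anything
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface InfoThrows {
        Class<? extends Exception>[] value();
    }

    @InfoThrows({IllegalStateException.class, UnsupportedOperationException.class})
    public void someMethod() { }

    public static void main(String[] args) throws Exception {
        // an IDE or analysis tool could discover the declared exceptions
        Method m = ThrowsDemo.class.getMethod("someMethod");
        InfoThrows declared = m.getAnnotation(InfoThrows.class);
        for (Class<? extends Exception> e : declared.value()) {
            System.out.println(e.getSimpleName());
        }
    }
}
```

Since the annotation is retained at runtime, a tool could walk a method's declared exceptions and warn about uncaught ones without the compiler forcing anything.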
If you've had the misfortune of mentioning AJAX in my presence, then you've heard me rant about the crappy user experience we are all willing to accept in the name of net connectedness. This really is a lamentation about the state of Rich Internet Application frameworks and my dislike for coding in Javascript. Well, it looks like there are more choices than I'd been aware of (the choice of Google search terms makes all the difference). Still not what I'd hoped for, but at least it's getting more digestible.
Programming based on the AJAX technique has certainly done much to elevate the quality of web apps, but I still feel they are always just a pale facsimile of good old desktop apps. Even the best webapp UI is generally only great "for a webapp". However, lots of libraries are emerging, as are a number of widget sets, so that's definitely improving. While most toolkits let you extend them, you're always doing your server in one language and your custom UI in Javascript.
What I personally have hoped for was a VM in the browser that was addressable by a number of compilers creating bytecode. Every time I see a platform that runs a VM underneath and doesn't let you address that VM directly, I feel like a great opportunity has been missed. Oddly, MS's CLR is the best example of a VM that lets you address it in virtually any language. They certainly didn't invent the concept, but they've promoted it. I think Sun did a major disservice to itself and to VMs in general when it married Java the language, the virtual machine and the religion into a single marketing entity. I mean, who even knows that there are lots of languages that can be used to target the JVM?
A while ago I found a post by Brendan Eich talking about the future of the Mozilla VM that mentioned Mono and the JVM as options. Yesterday, he posted about open web standards and I seized the opportunity to ask about bytecode addressability of JS2's VM. His answer about legal issues is likely a big reason why Mono was abandoned as an option:
"there won't be a standard bytecode, on account of at least (a) too much patent encrustation and (b) overt differences in VM architectures. We might standardize binary AST syntax for ES4 in a later, smaller ECMA spec -- I'm in favor."
But as he also pointed out, there is always compiling to Javascript instead of bytecode. The options he and another poster mentioned were:
Looking around for Javascript compilers, I noticed that this approach is also under development at MS as Script#, although it hasn't yet moved up to an official MS project. Interestingly, this does pit MS vs. Google once again, framed in a C# vs. Java context with ASP.NET AJAX w/ Script# and GWT. And if there's anything that's just as great for innovation as open standards, in my opinion, it's competition between giants. I look forward to seeing them try to outdo each other.
So far, we're stuck in the browser, and even with tabs, I sure hope this isn't the future of applications. If we are to move beyond the browser, what are our options?
Clearly, Adobe is leading the RIA platform wars with Flash and with Flex, SWF certainly looks more like a platform than an animation tool forced to render User Interfaces.
And Apollo certainly looks to push Flash as a platform as well as making it a stand-alone app platform. I certainly think this is going to be the juggernaut to beat. Given my dislike for the syntax of (Java|Ecma|Action)Script, it's unlikely to be my platform of choice. And I don't see Adobe supporting cross-language compilation and support for Eclipse or Visual Studio at the expense of their dev suite.
I really like the concept of WPF. Its philosophy is what I want the future to look like: markup and code separate, a common runtime addressable in many languages on client and server, a well-developed communications framework. Ah, it warms my heart.
But a) it's closed, b) it's only Windows (I'll get to WPF/E in a sec) and c) boy, is it over-architected. Now, it's at its 1.0 release, and if there's anything about MS releases, they seldom get it right in 1.0. We'll see what 2.0 looks like.
WPF/E looks like a combination of simplifying WPF (is this 2.0?) and going after Adobe. And with Script# and recent admissions of some type of CLR on Mac for WPF/E, we're looking at a trojan horse to get Rich Internet Application development in .NET established in both the browser and the desktop across platforms. Unfortunately, "across platforms" for MS still means Windows and Mac, while Adobe's Flash 9 has demonstrated their dedication to encompassing the Linux sphere as well. I don't think that's going to change... I just don't see MS going to Linux with the CLR, and I find the likelihood of them leveraging Mono's efforts just as unlikely. I wouldn't mind being wrong.
This isn't a platform per se, but I've seen a lot of cool tech demos using XUL and/or the canvas tag. Also, looking at the work that Michael Robertson's AJAX13 has done, I think there are the makings of a stand-alone app platform here. If your runtime requirements are "install Firefox, but you don't even have to use it as your browser if you don't want to", that's a pretty small barrier for a platform that runs everywhere. Personally, I hope someone straps Mono or the recently liberated JVM to XUL and builds a platform out of it (you'd get mature WS communication for free with either), because of all the options that looks the most appealing to me personally.
Considering that GWT and Script# had eluded my radar up until today, I'm sure there are even more options for RIAs out there. I just hope that people developing platforms take the multi-language lessons of the legacy platforms to heart. All the successful OSes of the past offered developers many ways of getting things done: you looked at your task, picked the language that best fit the task and your style of programming, and delivered your solution. VMs have shown that supporting many languages in an OS-independent way is viable, so if you're building a platform now, why would you choose to mandate a language and programming model? I sure hope the reason for not going this route isn't going to be "because the patent system is stopping me" -- that would be the ultimate crime of a system that was supposed to foster innovation.
The RangeValidation code I created turned out to be of limited usefulness, because Attributes are pretty picky about what they can take as arguments. This meant that I couldn't even create the decimal version, which was the original motivator. Revisiting the design, I refactored the range validation code into a generic Attribute paired with a helper class using generics to be type-safe. This also changed the usage to the following:
```csharp
/// <summary>
/// Valid from 0 to 10
/// </summary>
[ValidateRange(0, 10)]
public decimal Rating
{
    get { return rating; }
    set
    {
        ValidateRange<decimal>.Validate(value);
        rating = value;
    }
}

/// <summary>
/// Valid from 0 to Int.MaxValue
/// </summary>
[ValidateRange(Min = 0)]
public int Positive
{
    get { return positive; }
    set
    {
        ValidateRange<int>.Validate(value);
        positive = value;
    }
}
```
The ValidateRange class now simply takes the values and stores them as objects, leaving interpretation to the generic class ValidateRange<T>.
```csharp
[AttributeUsage(AttributeTargets.Property)]
public class ValidateRange : Attribute
{
    object min = null;
    object max = null;

    /// <summary>
    /// Validation with both an upper and lower bound
    /// </summary>
    /// <param name="min"></param>
    /// <param name="max"></param>
    public ValidateRange(object min, object max)
    {
        this.min = min;
        this.max = max;
    }

    /// <summary>
    /// Validation with only upper or lower bound.
    /// Must also specify named parameter Min or Max
    /// </summary>
    public ValidateRange()
    {
    }

    public object Min
    {
        get { return min; }
        set { min = value; }
    }

    public object Max
    {
        get { return max; }
        set { max = value; }
    }
}
```
The generic class ValidateRange<T> still has the static Validate accessor, which has to be used in the property tagged with the ValidateRange attribute, since it divines the property from the StackTrace. But given a PropertyInfo instance, the class can also be instantiated independently, so that validation code can inspect the range requirements and communicate them to the end user, rather than only reacting to the exceptions thrown by the property in question.
```csharp
public class ValidateRange<T> where T : IComparable
{
    /// <summary>
    /// Must be used from within the set part of the Property.
    /// It divines the Caller to perform validation.
    /// </summary>
    /// <param name="value"></param>
    static public void Validate(T value)
    {
        StackTrace trace = new StackTrace();
        StackFrame frame = trace.GetFrame(1);
        MethodBase methodBase = frame.GetMethod();
        // there has to be a better way to get PropertyInfo from methodBase
        PropertyInfo property
            = methodBase.DeclaringType.GetProperty(methodBase.Name.Substring(4));
        ValidateRange<T> validator = new ValidateRange<T>(property);
        validator.CheckRange(value);
    }

    bool hasMin = false;
    bool hasMax = false;
    T min;
    T max;
    PropertyInfo property;

    public ValidateRange(PropertyInfo property)
    {
        this.property = property;
        ValidateRange validationAttribute = null;
        try
        {
            // we make the assumption that if the caller is using
            // this object, they defined the attribute on the passed property
            validationAttribute
                = (ValidateRange)property.GetCustomAttributes(typeof(ValidateRange), false)[0];
        }
        catch (Exception e)
        {
            throw new InvalidOperationException(
                "ValidateRange Attribute not defined", e);
        }
        if (validationAttribute.Min != null)
        {
            hasMin = true;
            // use typeof(T) here: min is still at its default value,
            // so min.GetType() would blow up for reference types
            min = (T)Convert.ChangeType(validationAttribute.Min, typeof(T));
        }
        if (validationAttribute.Max != null)
        {
            hasMax = true;
            max = (T)Convert.ChangeType(validationAttribute.Max, typeof(T));
        }
    }

    public bool HasMin
    {
        get { return hasMin; }
    }

    public bool HasMax
    {
        get { return hasMax; }
    }

    public T Min
    {
        get { return min; }
    }

    public T Max
    {
        get { return max; }
    }

    private void CheckRange(T value)
    {
        // CompareTo only guarantees the sign of the result, not a
        // specific value, so test against zero rather than 1/-1
        if (HasMax && value.CompareTo(max) > 0)
        {
            throw new ArgumentOutOfRangeException(
                property.Name,
                "Value cannot be greater than " + max);
        }
        else if (HasMin && value.CompareTo(min) < 0)
        {
            throw new ArgumentOutOfRangeException(
                property.Name,
                "Value cannot be less than " + min);
        }
    }
}
```
One aspect of writing code that has always bothered me is how to communicate legal values. Sure, if the type a property accepts is of your own making, then the onus is on whoever created the instance to make sure it's a legal value. But so many properties are value types, and not every value is valid. Now, you say that's why we have validators; it's their job to check the validity and tell the user what they did wrong. Fine. That's the UI element for communicating the problem. But what always seemed wrong was that your UI code specified which values were legal. And if the use of business-level logic in UI code wasn't bad enough, how about the question of where the valid values came from anyway? Did the programmer innately know what was valid, or did the values come from the documentation? Neither allows programmatic discovery.
Now, maybe I'm late to the party and this has been obvious to everyone else, but at least in my exposure to .NET programming, Attributes have only cropped up occasionally -- when the platform provided or required them (primarily XML serialization). But I'm quickly coming around to creating my own. They're an easy way to attach all sorts of metadata that would usually be buried in a documentation paragraph.
Anyway, the other day I was creating an object model that included a number of numeric properties with limited legal ranges. So I a) documented this range limitation and b) created a helper class that would range-check my value and throw ArgumentOutOfRangeException as needed. Standard stuff. Then I exposed the class via a PropertyGrid, which got me to add design-time support via attributes.
If you haven't ever created a User Control or Custom Control for others to use, you may not be familiar with these attributes. Once you drag your own control into a form, all your public properties become visible in the Property window under Misc. This isn't always desirable, or at least not self-explanatory. To make your controls easier to use, you can apply [Browsable(false)] to mark a property as inaccessible for design-time support. If you do want properties available at design time, Description and Category are two important attributes for making your control more accessible, providing a description and a proper category label for your property, respectively. In general, there are a lot of useful Attributes in System.ComponentModel for design-time support.
Adding these Attributes, I decided that the same mechanism could be used to communicate valid values for any concerned party to divine. It looks like this had already been considered by MS, but only as part of PowerShell, with its ValidateRange Attribute. So I wrote my own interpretation, complete with a static helper to perform validation. This last part makes for maintainable code, but I'm not sure about the performance overhead introduced, so use with caution.
The implementation is type-specific and shown here as an int range validator. Unfortunately, classes deriving from Attribute can't be generic. Another nuisance is that Nullable types are not valid as parameters for Attributes.
```csharp
namespace Droog.Validation
{
    [AttributeUsage(AttributeTargets.Property)]
    public class ValidateIntRange : Attribute
    {
        /// <summary>
        /// Must be used from within the set part of the Property.
        /// It divines the Caller to perform validation.
        /// </summary>
        /// <param name="value"></param>
        static public void Validate(int value)
        {
            StackTrace trace = new StackTrace();
            StackFrame frame = trace.GetFrame(1);
            MethodBase methodBase = frame.GetMethod();
            // there has to be a better way to get PropertyInfo from methodBase
            PropertyInfo property
                = methodBase.DeclaringType.GetProperty(methodBase.Name.Substring(4));
            ValidateIntRange rangeValidator = null;
            try
            {
                // we make the assumption that if the caller is using
                // this method, they defined the attribute
                rangeValidator
                    = (ValidateIntRange)property.GetCustomAttributes(typeof(ValidateIntRange), false)[0];
            }
            catch (Exception e)
            {
                throw new InvalidOperationException(
                    "Cannot call Validate if the ValidateIntRange Attribute is not defined", e);
            }
            rangeValidator.Validate(value, property);
        }

        int? min = null;
        int? max = null;

        /// <summary>
        /// Validation with both an upper and lower bound
        /// </summary>
        /// <param name="min"></param>
        /// <param name="max"></param>
        public ValidateIntRange(int min, int max)
        {
            this.min = min;
            this.max = max;
        }

        /// <summary>
        /// Validation with only upper or lower bound.
        /// Must also specify named parameter Min or Max
        /// </summary>
        public ValidateIntRange()
        {
        }

        public bool HasMin
        {
            get { return (min == null) ? false : true; }
        }

        public bool HasMax
        {
            get { return (max == null) ? false : true; }
        }

        public int Min
        {
            get { return (int)min; }
            set { min = value; }
        }

        public int Max
        {
            get { return (int)max; }
            set { max = value; }
        }

        private void Validate(int value, PropertyInfo property)
        {
            if (max != null && value > max)
            {
                throw new ArgumentOutOfRangeException(
                    property.Name,
                    "Value cannot be greater than " + max);
            }
            else if (min != null && value < min)
            {
                throw new ArgumentOutOfRangeException(
                    property.Name,
                    "Value cannot be less than " + min);
            }
        }
    }
}
```
To properly take advantage of this Attribute, we must both attach the attribute to a property and call the Validation method:
```csharp
/// <summary>
/// Valid from 0 to 10
/// </summary>
[ValidateIntRange(0, 10)]
public int Rating
{
    get { return rating; }
    set
    {
        ValidateIntRange.Validate(value);
        rating = value;
    }
}

/// <summary>
/// Valid from 0 to Int.MaxValue
/// </summary>
[ValidateIntRange(Min = 0)]
public int Positive
{
    get { return positive; }
    set
    {
        ValidateIntRange.Validate(value);
        positive = value;
    }
}
```
Voila: properties with range validation, and anyone using this class can programmatically determine the valid range and use that information in their UI, etc.
Turns out I was a bit quick on the trigger. Attributes are even more limited than I thought in the types of data they accept. The moment I tried to create a decimal version of the above, I got stuck. So what I will probably do is create a ValidateRange that takes strings and then uses Reflection on the caller to determine what type the values should be cast to. It'll be more flexible as well, since I'll need only one ValidateRange instead of one per numeric type.
I know I'm not the only .NET developer for Smartphone. But it sure seemed that way when I tried to find a solution for a .NETCF 1.0 bug with multiline textboxes, hardly an uncommon control, IMHO. After trying a number of different hacks, I finally constructed one that seems to solve all the problems. But what is the problem in the first place?
When the textbox is drawn, it will paint the background of any area that contains text, but leave any empty areas unpainted, i.e. showing whatever was previously on the screen. This is aggravated by the fact that T9 text input will often partially obscure the TextBox, and once the T9 pop-up disappears the TextBox won't repaint the dirty areas. The closest thing to an official recognition of this bug that I could find was a post on the MSDN Forums that said it was known and to switch to .NETCF 2.0. Now, I would love to switch to 2.0, since Windows.Forms on .NETCF 1.0 is missing a lot. Unfortunately, all current Smartphones ship with 1.0. I could require users to upgrade their phones. But even if I just pluck the ARM .cab out of MS's redistributable, that's still a 6MB install, not insignificant for loading over the phone network nor for the memory available on the device. And all that so they can run a 500KB app? Right. That path is not an option.
I initially hacked around the bug by first drawing the background and using a timer to draw the textbox itself a moment later. While this worked initially, it added the complication of having to hide the textbox to refresh it. Hiding it made it lose focus, so I had to catch that and refocus it. This in turn did screwy things with T9 text entry modes on most phones I tested.
The solution that does work without any side effects that I've found is simply to overlay the textbox with a background-colored panel for a moment. Once the panel is removed, the TextBox repaints itself, and any area not repainted because of the bug is left with the color of the overlay. This does not affect focus, so it can be done at any time.
Let's assume you have a TextBox _textBox_ and a Panel _wiper_ of equal size, placed overlapping in the form, as well as a disabled Timer _timer_ running at the shortest allowable interval. Finally, you have a bool _wipe_ to indicate whether we've performed the wipe. With this setup, the logic for "wiping" the TextBox clean is as follows:
That solves the problem of painting the areas the TextBox forgets. We can call Wipe() in Refresh() or pretty much any time we know we've obscured the control. Wouldn't it be useful if Invalidate were virtual in 1.0 as well? Ho hum. Unfortunately, it's completely up to you to know when your TextBox is invalidated and needs a wipe-down.
If you are handling the UI, that's manageable, but what about those T9 pop-ups? For all your program knows, there is no such thing. Even if you derived a class off of TextBox and overrode its OnPaint(), you'd still never see a paint message, because the actual textbox painting caused by the T9 pop-up appears to happen at the OS level, below .NET. To detect a T9 pop-up, or let's say, to infer that it's happening, you have to watch both the KeyDown and KeyPress events. As soon as the T9 pop-up appears, you won't see your characters come through as KeyPresses anymore. Instead, you will see KeyDown events fired with a KeyCode of ProcessKey. The next time you see a KeyPress event, it's all the keys of the completed word being spewed into the TextBox. Therefore, if you see a KeyPress come in after a ProcessKey, you know a T9 pop-up just closed and it's a good time to wipe down your TextBox. Given a flag bool _inT9_, the code would look like this:
At the end of the day, this is all fairly elaborate just to use a standard multi-line TextBox, but I have not found another way to do it on a stock Smartphone under .NETCF. I'd be glad to be proven wrong, but in the meantime, at least this is an option.
I've been doing quite a bit of work with the HTC Dash Smartphone. One thing all smartphones (and all phones, really) do is go to sleep quickly to save battery. Well, on the Dash, when you push any button, it just wakes up and doesn't act on the button. Seems like the proper behavior, imho.
Then I switched over to the Nokia 2125, and it would wake up and then take the action appropriate for the pressed button. Really? I mean, don't execute an action whose effects I can't even know until the screen is on and I can see the context. Finding the behavior annoying, I blamed Nokia for creating a bad implementation.
That got me curious, so I took a survey of all the smartphones available to me, and the 3125, SDA and xv6700 (PocketPC, not Smartphone, I know) all do this. Only the Blackjack behaves like the Dash, and at least in my mind, properly.
Now that we have a syntax for defining state-aware objects, we need the code that wires these things up for us automatically. This consists of the following:
This attribute simply marks a method as a state-aware method. As previously illustrated, there is a lot of manual footwork to get all the plumbing in place for a method marked as such. I'm still trying to figure out whether I can simplify that some more and let the base class do the work. The other option is a code generator that spits out the appropriate block into a separate file as a partial class definition.
The method delegate argument that the constructor takes names the delegate that the method calls to get its work done.
```csharp
namespace Droog.Stately
{
    [AttributeUsage(AttributeTargets.Method)]
    public class StateMethod : Attribute
    {
        string methodDelegateName;

        public StateMethod(string methodDelegateName)
        {
            this.methodDelegateName = methodDelegateName;
        }

        public string MethodDelegateName
        {
            get { return methodDelegateName; }
        }
    }
}
```
This attribute tags methods as handlers for state methods. As such, they must have the same signature as the actual method (although they can be public, private, protected, whatever), or more to the point, the same signature as the delegate that the state method calls. It's conceivable that the state method adds another argument, although I can't really see why.
There are two types of StateMethodHandlers: Default and state specific.
```csharp
namespace Droog.Stately
{
    [AttributeUsage(AttributeTargets.Method)]
    public class StateMethodHandler : Attribute
    {
        string methodName;
        byte state;
        bool isDefault;

        public StateMethodHandler(string methodName, byte state)
        {
            this.methodName = methodName;
            this.state = state;
            this.isDefault = false;
        }

        public StateMethodHandler(string methodName)
        {
            this.methodName = methodName;
            this.isDefault = true;
        }

        public bool IsDefault
        {
            get { return isDefault; }
        }

        public string MethodName
        {
            get { return methodName; }
        }

        public byte State
        {
            get { return state; }
        }
    }
}
```
This is a very simplistic implementation. Note that it assumes the derived class does everything right, i.e. if you forgot the default handler, it would die unceremoniously; if you had multiple handlers declared for the same thing, it would just take the last one; etc.
It also handles only Methods. It should probably be extended to at least properties. But extending it to event handlers could be pretty powerful as well.
The mechanism tries to do as much as possible at first instantiation, both to reduce the overhead and to allow it to fail early instead of at call time should the definition be wrong. If performance were a problem, the creation of the delegates could be moved to instantiation as well, and done at each instantiation. It would create increased overhead for instantiation and a larger memory footprint, so it's a modification that needs to be dictated by use.
using System;
using System.Collections.Generic;
using System.Reflection;

namespace Droog.Stately
{
    public class AStateObject
    {
        /// <summary>
        /// Internal collection for storing the mappings for state delegation.
        /// </summary>
        private class StateDelegateMap
        {
            MethodInfo[] stateDelegateMethodInfo = new MethodInfo[10];
            MethodInfo invokingMethodInfo;
            FieldInfo delegateFieldInfo;
            MethodInfo delegateMethodInfo;

            public MethodInfo this[byte idx]
            {
                get { CheckBounds(idx); return stateDelegateMethodInfo[idx]; }
                set { CheckBounds(idx); stateDelegateMethodInfo[idx] = value; }
            }

            private void CheckBounds(byte idx)
            {
                // grow the storage on demand so any byte state id is a valid index
                if (idx >= stateDelegateMethodInfo.Length)
                {
                    MethodInfo[] newStorage = new MethodInfo[idx + 1];
                    stateDelegateMethodInfo.CopyTo(newStorage, 0);
                    stateDelegateMethodInfo = newStorage;
                }
            }

            public MethodInfo InvokingMethodInfo
            {
                get { return invokingMethodInfo; }
                set { invokingMethodInfo = value; }
            }

            public FieldInfo DelegateFieldInfo
            {
                get { return delegateFieldInfo; }
                set { delegateFieldInfo = value; }
            }

            public MethodInfo DefaultDelegateMethodInfo
            {
                get { return delegateMethodInfo; }
                set { delegateMethodInfo = value; }
            }
        }

        /// <summary>
        /// Internal dictionary for mapping methods to delegation maps.
        /// This is really just shorthand for a typed dictionary.
        /// </summary>
        private class StateMethodMap
            : Dictionary<string, StateDelegateMap>
        {
        }

        /// <summary>
        /// Static storage for our mappings. The base class stores them all
        /// and lazy-initializes them the first time an instance of
        /// a Type is created.
        /// </summary>
        static Dictionary<Type, StateMethodMap> stateDelegationMap
            = new Dictionary<Type, StateMethodMap>();

        protected byte internalStateId;

        protected AStateObject()
        {
            InitStateHandlers();
        }

        /// <summary>
        /// The internal representation of states as bytes. It's up
        /// to the deriving class to decide whether an accessor should
        /// be exposed and what datatype (an enum is recommended) is used
        /// to represent the class's states.
        /// </summary>
        protected byte InternalStateId
        {
            get { return internalStateId; }
            set
            {
                internalStateId = value;
                InitState();
            }
        }

        /// <summary>
        /// This creates the static mapping of state methods to possible
        /// delegates. It does not create delegates, because they are tied
        /// to instances and therefore have to be created for each
        /// instance. That creation happens in <see cref="InitState"/>.
        /// </summary>
        private void InitStateHandlers()
        {
            Type t = GetType();
            if (stateDelegationMap.ContainsKey(t))
            {
                // we already generated this map, so we can skip this step
                return;
            }
            MethodInfo[] methods = t.GetMethods(
                BindingFlags.Instance |
                BindingFlags.NonPublic |
                BindingFlags.Public);
            StateMethodMap methodMap = new StateMethodMap();
            stateDelegationMap.Add(t, methodMap);

            // find all state methods for this type
            foreach (MethodInfo m in methods)
            {
                foreach (StateMethod attr
                    in m.GetCustomAttributes(typeof(StateMethod), true))
                {
                    FieldInfo delegateField = t.GetField(
                        attr.MethodDelegateName,
                        BindingFlags.Instance |
                        BindingFlags.NonPublic);
                    StateDelegateMap delegateMap = new StateDelegateMap();
                    delegateMap.InvokingMethodInfo = m;
                    delegateMap.DelegateFieldInfo = delegateField;
                    methodMap.Add(m.Name, delegateMap);
                }
            }

            // find all state delegates for this type
            foreach (MethodInfo m in methods)
            {
                foreach (StateMethodHandler attr
                    in m.GetCustomAttributes(typeof(StateMethodHandler), true))
                {
                    if (methodMap.ContainsKey(attr.MethodName))
                    {
                        StateDelegateMap delegateMap = methodMap[attr.MethodName];
                        if (attr.IsDefault)
                        {
                            // default method handler
                            delegateMap.DefaultDelegateMethodInfo = m;
                        }
                        else
                        {
                            // state specific method handler
                            delegateMap[attr.State] = m;
                        }
                    }
                }
            }
        }

        /// <summary>
        /// This gets called every time we change the state id and creates
        /// the appropriate delegates from the static mapping.
        /// </summary>
        private void InitState()
        {
            StateMethodMap methodMap = stateDelegationMap[GetType()];
            foreach (StateDelegateMap delegateMap in methodMap.Values)
            {
                MethodInfo delegateMethodInfo;
                if (delegateMap[InternalStateId] != null)
                {
                    // got a state specific method, let's map it
                    delegateMethodInfo = delegateMap[InternalStateId];
                }
                else
                {
                    // no state specific method, use the default
                    delegateMethodInfo = delegateMap.DefaultDelegateMethodInfo;
                }
                Delegate stateDelegate = Delegate.CreateDelegate(
                    delegateMap.DelegateFieldInfo.FieldType,
                    this,
                    delegateMethodInfo);
                delegateMap.DelegateFieldInfo.SetValue(this, stateDelegate);
            }
        }
    }
}
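If creating delegates on every state change ever proved costly, one option is to cache created delegates per state on the instance, trading memory for speed as mentioned earlier. This is a sketch of the idea, not part of the framework above; the class and member names are hypothetical, and a real version would key the cache per (method, state) pair rather than per state alone.

```csharp
using System;
using System.Collections.Generic;
using System.Reflection;

public class CachedStateObject
{
    // Illustrative sketch only: a per-instance cache keyed by state id, so
    // transitioning back into a previously visited state reuses the delegate
    // created the first time instead of calling Delegate.CreateDelegate again.
    private readonly Dictionary<byte, Delegate> stateDelegateCache =
        new Dictionary<byte, Delegate>();

    private Delegate GetOrCreateDelegate(
        byte stateId, Type delegateType, MethodInfo target)
    {
        Delegate cached;
        if (!stateDelegateCache.TryGetValue(stateId, out cached))
        {
            cached = Delegate.CreateDelegate(delegateType, this, target);
            stateDelegateCache[stateId] = cached;
        }
        return cached;
    }
}
```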
Now we have a fairly simple framework to create objects in our simulation that can alter their behavior depending on their state. As I start writing code with this framework, it'll probably get fleshed out a bit more and I'll find out whether the design is sustainable, since right now it's a clean-room design more than anything else.
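To make the consumption side concrete, here is a sketch of what a class built on this framework might look like. The StateMethod and StateMethodHandler attribute names come from the framework code above, but their exact constructors, the Bot class, and the use of Action as the delegate type are my assumptions.

```csharp
using System;

// Hypothetical Bot: states are a per-class enum backed by the framework's
// byte-based state id.
public enum BotState : byte { Idle = 0, Attacking = 1 }

public class Bot : AStateObject
{
    // the delegate field InitState rebinds on every state change
    protected Action hitDelegate;

    public BotState State
    {
        get { return (BotState)InternalStateId; }
        set { InternalStateId = (byte)value; }
    }

    // the state-aware entry point just invokes the currently bound delegate
    [StateMethod(MethodDelegateName = "hitDelegate")]
    public void Hit()
    {
        hitDelegate();
    }

    // fallback used for any state without a specific handler
    [StateMethodHandler("Hit", IsDefault = true)]
    protected void HitDefault()
    {
        Console.WriteLine("Ouch.");
    }

    // only used while in the Attacking state
    [StateMethodHandler("Hit", State = (byte)BotState.Attacking)]
    protected void HitWhileAttacking()
    {
        Console.WriteLine("Fighting back!");
    }
}
```

Setting `bot.State = BotState.Attacking` would run InitState, which rebinds hitDelegate to HitWhileAttacking, so the next call to Hit() takes the attacking code path.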
One thing I've always liked about UnrealScript was its inclusion of states as a first class citizen in the language. It's something that just made sense for the problem that UnrealScript tries to solve. After doing some Unreal modding, I thought about building a game scripting language of my own and contacted Tim Sweeney for his advice. I don't know if he just didn't get as much mail as he does now, or if he is just one of the incredibly helpful guys in the industry, but he got back to me in a day and gave me some good advice on language design and generally told me to bone up on the Java Virtual Machine design docs -- which is where he got a lot of his ideas for UnrealScript.
While fun, that project didn't get very far, but I still keep coming back to first-class stateful programming from time to time. Now that XNA is out, it seems like the perfect time to revisit this, because writing your game logic in .NET could certainly benefit from UnrealScript's approach.
There are three approaches to this problem that I think are all viable:
Implement an UnrealScript compiler for .NET
This would be a cool project and could be something to try to get Epic to pick up and have them embed .NET/Mono into Unreal. It would provide their clients with more programmatic flexibility (i.e. more languages), a richer framework, and might even add a good bit of performance (I'm not sure how optimized the UnrealScript VM is by now). It's basically what Second Life is up to. Of course, LSL was always a scripting language, while UnrealScript has been a bytecode-compiled language from the start, so the comparison might be flawed.
Create a C# derivative that implements new state keywords
This is my fav, just because I prefer to have C# as my foundation. While UnrealScript is probably one of the nicest game scripting languages around, its syntax is still a bit strange to me at times -- like state() vs. state keywords and the lack of static methods. But it would be a major undertaking, so it's likely to be the ultimate approach, not the initial one.
See what can be done in C# to create a stateful object framework
This is the path I've chosen for now, because it let me prototype the whole thing in a day, which is always nice. If it turns out to be useful, the previous approach can always build on this work.
So what are my design goals for this stateful C# framework?
Different logic paths per state on state aware methods
Really, this is the definition of state-aware objects: if the state of my Bot is Idle, a different code path for Hit() should be called than if the Bot were in the Attacking state.
State specific methods should be inherited
Extend regular inheritance to per-state logic paths, i.e. a subclass should be able to override the existing default and state-specific methods, as well as add state-specific methods where the base class was just using the default.
Default and state specific methods
A method that is declared as state aware should always have a default implementation, so that states with identical behavior don't require duplicate code or stubs that just point to a common implementation.
Type-safe states
I always prefer strongly-typed definitions, so enums seem like the ideal way to express states. The biggest drawback is the lack of inheritance in enums: if we used one enum for all states of all objects, the state names would quickly become meaningless. So the design must allow for per-class enumerations of states, and the framework should not have a preconceived idea of the possible states.
Serializable state-aware objects
If our objects store state, we should be able to fully serialize them, including their state, so we can take a snapshot of our action at any point in time.
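The delegate fields are the tricky part for this last goal: they capture the instance and the bound method, so they shouldn't be written out with the object. Here is a sketch of one way to handle it, assuming BinaryFormatter-style serialization; the Door class and its openDelegate field are hypothetical, and this also assumes the type has been instantiated at least once (or InitStateHandlers has been made callable) before deserializing, since constructors don't run during deserialization.

```csharp
using System;
using System.Runtime.Serialization;

// Sketch: delegate fields are marked [NonSerialized] and rebound after
// deserialization by re-setting the state id, which runs InitState again.
[Serializable]
public class Door : AStateObject
{
    [NonSerialized]
    protected Action openDelegate;

    [OnDeserialized]
    private void RebindDelegates(StreamingContext context)
    {
        // re-assigning the restored state id recreates the per-state
        // delegates for this instance
        InternalStateId = internalStateId;
    }
}
```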