How timely.. Using the xv6700 for net access
Engadget has a how-to on using an EV-DO device for internet access from a laptop. They use the xv6700 in their tutorial, so I need to try that out.
As it turns out, my phone was not developer-locked, as I had assumed from some of the sites I found. It was mostly a matter of getting VS.NET 2k5 to install its code on the phone. Not that that went without problems, and I gather I still need to figure out how rapiconfig enters into this.
I created a demo in VS.NET 2k5 and deployed it to the phone, where it failed without any useful error. However, the program was over there, so I ran it manually. The problem was my version of the .NET Compact Framework: I only had 1.1 and needed 2.0 for what Visual Studio built. I searched for a while for a way to target 1.1 from VS.NET 2k5 and found references claiming that I could choose at project creation. However, I found no such option when I tried a new project. I think I have a release candidate of 2k5 on my home machine, so maybe it's supposed to be there and just not in my version. I will check on that tomorrow with my work machine, which has the official release on it.
So I fired up VS.NET 2k3. This one wouldn't connect to my phone. I played around with the settings for a while, but no go. So I hand-deployed the build and, voila, first test running on the phone.
Good. But I don't want to target 1.1 or use VS.NET 2k3 if I can avoid it. Can I upgrade my phone? I downloaded the .NET CF 2.0 redistributable. It didn't want to install; it complained about another version being on my machine. Grrr... I don't care what my machine has, I want it on my phone! But I uninstalled the CF 2.0 I had, re-installed the new one, and it promptly upgraded my phone.
After the phone rebooted, it no longer came up on my machine when I plugged it in. But that was just an artifact of the install; a reboot fixed that. In the meantime, I fired up the binary I had previously built for 2.0, and now it worked as well. We've got a platform to play with!
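For what it's worth, the test program doesn't need to be anything fancy. A minimal sketch along these lines (hypothetical, not my actual demo) is enough to prove that the deployed binary and the CF runtime on the device agree:

using System;
using System.Windows.Forms;

namespace CFSmokeTest // made-up project name
{
    static class Program
    {
        static void Main()
        {
            // If this dialog appears on the phone, the build target and
            // the installed Compact Framework version match.
            MessageBox.Show("Hello from .NET CF 2.0!");
        }
    }
}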
I got an xv6700 last week, trying to rid myself of the extra bulk of my Palm Pilot. The xv6700 is not too large, but it has a nice large screen and a usefully sized keyboard. Plus, it's a full Windows CE 5.0 device, so I'll start writing apps for it as soon as I figure out how to unlock it for development.
In the meantime, my primary concern has been calendaring. That's been a pet peeve of mine for a while. I want to be able to set up meetings with people outside the boundaries of my personal calendaring program, but the solutions all used to be isolated to their own servers. With iCal becoming widely accepted, that's finally gone away. I can invite people that have an iCal-capable product (zimbra, google, etc.) from the exchange server at work and vice versa, and everything is happy.
However, syncing to mobile still sucked, so my palm pilot and my online calendars were constantly out of sync. By switching to the xv6700, I hope to change that.
Now if I ran Exchange for personal use, all problems would be solved, but I don't. I'm currently playing with zimbra for personal use, and there is still a use case that fails: inviting someone who uses zimbra from the phone. However, inviting someone who uses zimbra from exchange works, so Mobile Outlook would seem to use a different calendar file format. Annoying. Still, that means the only time I'm out of sync is when I create a new event on the phone. I can live with that for now.
Here's what google sees from the different types of invites:
Invite from Zimbra
Content-Type: text/calendar; name=meeting.ics; method=REQUEST; charset=utf-8
Content-Transfer-Encoding: 7bit
Invite from Exchange
Content-class: urn:content-classes:calendarmessage
Content-Type: text/calendar;
name="meeting.ics";
method=REQUEST
Content-Transfer-Encoding: 8bit
Invite from xv6700
Content-Type: application/ms-tnef
Content-Transfer-Encoding: base64
X-MS-Has-Attach:
X-MS-TNEF-Correlator: 221hvc0sdpi5
Content-Disposition: attachment
It would appear that, at the very least, Mobile Outlook doesn't set up the invite as an .ics; instead it uses MAPI and sends it as a TNEF file. So Exchange will send things out in a portable format, but Mobile Outlook won't. Pity. I can't find any options to change that behavior either.
Former High Geek of MP3.com, Sander van Zoest, just posted a video from the night we unleashed the bomb. Who would have thought that we'd get sued for 80 billion dollars? No, seriously: those were the damages sought, based on $100,000 per song ripped by the bomb.
Hey, it was a worthwhile attempt to try to force the record companies into licensing content in a consumer-friendly manner. The most interesting lessons learned from the legal aftermath were a) that the record companies couldn't actually have licensed us the CDs if they had wanted to, so convoluted were the rights-holder business and the question of whether this was mechanical or performance, and b) that the record companies don't actually know, in any detail, what they hold copyrights for. What a record-keeping nightmare.
Sure, our ploy didn't make it, but it did pave the road for licensing and pushed Sony/UMG to create a venture for legal music called Duet, later renamed Pressplay, or as it is now called, Napster.
As I mentioned in my update to my last post, the custom URLs break down on postback because the form doesn't realize where it's supposed to post back to. To get around this, we need two new classes: a new Page base class and a custom HtmlTextWriter:
public class RewriteBaseClass : System.Web.UI.Page
{
    protected override void Render(HtmlTextWriter writer)
    {
        // Wrap the real writer so we can intercept attributes as the page renders
        CustomActionHtmlTextWriter customWriter = new CustomActionHtmlTextWriter(writer);
        base.Render(customWriter);
    }
}
and
public class CustomActionHtmlTextWriter : HtmlTextWriter
{
    public CustomActionHtmlTextWriter(HtmlTextWriter baseWriter) : base(baseWriter)
    {
    }

    public override void WriteAttribute(string name, string value, bool fEncode)
    {
        // Point the form's action at the URL the browser actually requested,
        // instead of the physical .aspx path the rewrite resolved to
        if (name == "action")
        {
            value = HttpContext.Current.Request.RawUrl;
        }
        base.WriteAttribute(name, value, fEncode);
    }
}
This seems to work, although I'm not yet convinced it's the best way or without side-effects. Need to play with it a bit more.
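Hooking it up is just a matter of having pages inherit from the new base class instead of System.Web.UI.Page directly. A minimal sketch (MyApp is a made-up page name):

public class MyApp : RewriteBaseClass
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Nothing special needed here; the overridden Render()
        // fixes up the form's action attribute on every request.
    }
}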
One thing I used to do in mod_perl under Apache is use the <Location> directive to feed my PerlHandlers instead of using extensions. Not only did that give my apps nice URLs like foo.com/myapp, but the PathInfo /user/add could be interpreted as arguments. Much nicer than an extension-based URL with the arguments packed into a query string.
On the ASP.NET side, everything always seemed very file system based, giving it an almost CGI feel, even though under the hood it couldn't have been further from a CGI. Sure you could register your own extensions, but again, extensions and directories -- so filesystem.
I figured it must be possible to process requests by hand, and it turns out to be rather simple: in IIS, just map * (not .*) to ASP.NET for your webapp and you can catch every request. And you don't have to give up on the existing ASPX, ASMX or ASHX infrastructure. Just use HttpContext.RewritePath(string path) to process the incoming requests and send them off to your regular pages or web services.
By default you lose the path info that you'd get in the equivalent Apache setup, but you can reconstruct it with a little Application_BeginRequest code:
protected void Application_BeginRequest(Object sender, EventArgs e)
{
    // Strip the application root, leaving e.g. "myapp/user/add"
    int start = Request.ApplicationPath.Length + 1;
    string path = Request.Path.Substring(start);

    // The first segment picks the page; the rest becomes its PathInfo
    string[] info = path.Split(new char[] {'/'}, 2);
    string handler = info[0];
    string pathinfo = info.Length > 1 ? "/" + info[1] : "";
    this.Context.RewritePath("~/" + handler + ".aspx" + pathinfo);
}
This will take a URL such as foo.com/myapp/user/add and call myapp.aspx with a PathInfo of /user/add.
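Inside the target page, the extra segments come back via Request.PathInfo, so myapp.aspx can pull its arguments apart with something like this (the variable names are just for illustration):

protected void Page_Load(object sender, EventArgs e)
{
    // For foo.com/myapp/user/add, Request.PathInfo is "/user/add"
    string[] args = Request.PathInfo.TrimStart('/').Split('/');
    string noun = args[0];                          // "user"
    string verb = args.Length > 1 ? args[1] : null; // "add"
}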
Update: Ok. So this doesn't work quite as well as I'd hoped, since roundtrips will change the URL around. While it doesn't error out, the URL gets ugly again and you have to do some clean-up of your path info. Basically, on postback foo.com/myapp/user/add becomes foo.com/myapp/user/add/myapp.aspx.
So somehow the action of the default form needs to be changed (not allowed by default). Overriding the renderer or using JavaScript seem like options. I'll post again when I have a working, non-hideous solution.
I've been a dedicated TiVo fanatic for six years now. The concept of consulting a TV guide and arranging my daily life so I can sit down in front of the TV in accordance with somebody's idea of appropriate scheduling is one I can no longer accept. I'd rather not watch TV than watch it on someone else's schedule.
At the same time, the fear of being an evangelist for a technology is always "will my technology be Betamax?" That is, will its superior implementation ultimately mean nothing in the face of someone else's mass-market delivery?
And because of TiVo, I have become somewhat of a HiFi Luddite. Sure, I own an HDTV, but I take crappy low-def cable and then stretch and crop it to fit 16:9. And 5.1 sound? Bah, stereo is good enough for me. If my TiVo can't do it, I don't want it. I don't care if the gore in CSI is so much crisper in HD; if I can't record it and watch it with the Pause button in hand, it's not worth it. I've been tempted by the Time Warner DVR a number of times, and I prefer the concept of renting a DVR over buying one and paying a monthly fee. But while the non-TiVo products are DVRs, it's the little things in the way TiVo behaves that make it hard to give up. I know plenty of people who have the HD DVR for HD, but still use the TiVo for everything else.
So why no HD TiVo? Or why not build my own, card-carrying geek that I am? Well, Ars Technica has a very nice article on why we are stuck in a consumer hell where all the cool new toys don't really do what you'd expect them to do as a consumer.
Sure, I hope that I'll have the opportunity to buy and use a Series3, but I'm not holding my breath. If the lead time from hearing about HD HTPCs to getting one that's worthwhile is as long as the time between seeing the first HDTV and buying one (and not watching HD on it), then I've got some time to go still. Truly, I still think CableCARD is about as likely to hit the mass market as SDMI.
Just after starting to play with closures in javascript (to fake delegates), I ran across an excellent series of articles on closures and anonymous functions in C# 2.0, complete with pitfalls. Cool stuff...
The implementation of anonymous methods in C# and its consequences
I've recently been doing javascript coding again. Being the object bigot that I am, everything that interacts with a DOM element gets wrapped in an object that becomes responsible for that element's behavior. Well, then I tried to move all the event handler code from the procedural side to the object side, and things broke, hard.
At first I was confused about why it wouldn't call my this._InternalMethod inside my event handler. Then I remembered that I've been spoiled by the CLR and that I was dealing with plain old function pointers, not delegates.
While the Atlas framework provides delegate functionality (along with a lot of other useful things), this wasn't a .NET 2.0 project, and I didn't want to graft the Atlas client side onto it as a dependency. But knowing that Atlas does delegates, I knew it was possible... but how?
I found the answer in this article, which basically uses closures in javascript to let the object context persist in event handlers.
So basically, to create an event handler that maintains its object context, do this:
function MyObject(name)
{
    this._name = name;

    // Capture "this" in a local variable so the closure below can
    // still reach the object when the browser invokes the handler
    var _this = this;
    this.MyEventHandler = function()
    {
        alert("My name is " + _this._name);
    }
}
Great. Now I can avoid all procedural code and just have my objects subscribe themselves to element and document events and handle them in their own context.
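Wiring one up looks something like this (the element id is made up for illustration):

var greeter = new MyObject("greeter");
// The _this closure keeps the handler bound to greeter, no matter who calls it
document.getElementById("myButton").onclick = greeter.MyEventHandler;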
When we got a Humax Series 2 TiVo (to replace our faithful Series 1), it almost immediately started making the clicking noises that usually signify the imminent death of a disk. We figured that if we sent it in, they'd just see it working and send it back. So we started the TiVo deathwatch. Two years later it still hadn't died, but now, about once a week, the clicking is followed by the machine rebooting. The warranty expired a while back, so it's time to replace the disk.
Since it's a Series 2, I decided to use a disk larger than 137GB. A quick trip to Fry's and I had a shiny 200GB disk. Now, it's been a while since I put the 120GB into the trusty Series 1 (which is still running strong, and better than the Series 2), so my other PCs have gone through some changes. It turns out that all my new machines use SATA and have only one ATA port. And the DVD is hooked to that port, which leaves only one more available connection.
Right, you need at least three -- CD, source HD, target HD. Ok, no problem, I have an old box that's the home file server. That one not only has two ATA ports, it also has a PCI card with another two (for large-disk support). A couple of minutes later, I booted into the weaknees CD and all seemed fine. The boot messages showed all disks properly set up. But apparently the setup that weaknees used did not have any devices above /dev/hdd, so my disks hooked to /dev/hdf and /dev/hdh were not accessible. And I couldn't use /dev/hda through /dev/hdd because my disk was 200GB and was not recognized as such on the onboard ports.
Fortunately, the server itself runs Fedora, so I copied the mfs tools from the CD onto the HD and booted into Fedora with the TiVo disks attached. Everything worked out fine, and now I've got a potential 219 hours on tap!
But I guess the lesson is that ATA is going away, so what I used to take for granted is no longer there when TiVo tweaking calls.