
Iloggable

Vibing Voxxon - A Zaxxon inspired isometric voxel space shooter


I grew up on 80s arcade shooters. Asteroids, Space Invaders, Galaxian, Scramble, Defender and others all consumed an inordinate number of quarters, basically whatever allowance I had. Of all of them, the one that left the most lasting impression was Zaxxon. The 1982 Sega arcade cabinet, and later its ports to whatever home computer you could convince your parents to buy, was unlike anything else at the time. An isometric scrolling shooter where you had to manage altitude to fly through gaps in walls, dodge missiles, and bomb ground targets. The shadow beneath your ship was your only depth cue. It was brutal, beautiful, and unique for its era. Mind you, I always sucked at it, but I could watch the attract screen all day.

So when I was looking for a bite-sized project to try Claude Code on, Zaxxon immediately came to mind. It was something I could easily do myself, which gave me confidence that I would be in a good position to judge the output that Claude produced.

The result is Voxxon — a browser-based isometric scroller in Three.js with voxel explosions, a leaderboard, a boss battle, and more features than I originally planned.

Mastodon Integration As A Function

First post in almost two years and what is it about? Refactoring my mastodon comment system yet again, this time turning it into a Digital Ocean Function. I know, I know, wasn't it essentially a function with AWS Lambda originally? Yes, it was, but I converted it to ECS because of the NAT and elastic IP requirement.

That limitation, it turns out, does not apply to DO Functions. They have public internet access out of the box at no extra cost. So it's more economical than either my AWS Lambda or AWS ECS approaches.

Rust inspired Async Lock for C#

Venturing back into C# for the first time in a long time. I'm only 5 minutes in and I already feel compelled to create a new utility class.

Essentially I'm adding an ASP.NET Web API to a daemon process, which requires that the REST API get exclusive access to some of the daemon's in-memory state. Since I'm building everything around async/await, I went looking for the async equivalent of the traditional lock, only to find that one doesn't exist. From what I could gather, the recommendation is to build one around SemaphoreSlim. After using that explicitly, I was really uncomfortable with having to make sure that I a) released the semaphore and b) did it in a try/finally.
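The Rust-inspired fix is, presumably, a lock whose acquire returns a guard that releases itself when it goes out of scope, so no manual release or try/finally is needed. A minimal sketch of that pattern, in Python rather than C# and with invented names:

```python
import asyncio


class AsyncLock:
    """Async lock whose acquire returns a self-releasing guard."""

    def __init__(self):
        self._lock = asyncio.Lock()

    async def acquire(self):
        await self._lock.acquire()
        return _Guard(self._lock)


class _Guard:
    """Releases the lock on scope exit, Rust-guard style."""

    def __init__(self, lock):
        self._lock = lock

    def __enter__(self):
        return self

    def __exit__(self, *exc):
        self._lock.release()


async def demo():
    lock = AsyncLock()
    events = []

    async def worker(name):
        with await lock.acquire():  # released automatically, even on exceptions
            events.append(name)
            await asyncio.sleep(0)  # yield control while holding the lock
            events.append(name)

    await asyncio.gather(worker("a"), worker("b"))
    return events


# Each worker's two appends stay adjacent: the lock serialized them.
print(asyncio.run(demo()))
```

In the C# version the same shape would fall out of SemaphoreSlim plus an IDisposable guard consumed by a `using` block.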

Mastodon Integration

Well, damn it. Here I am procrastinating on my archviz project with a server-side project. I am an infrastructure engineer at heart, so anything server related will always pull at me harder than any other project. So right after writing the Future Work section in my last post, I started looking more closely at Daniel's approach to using mastodon for comments. The part that scratched my server-side geek itch was the chicken-and-egg problem forcing his workflow:

  1. create post
  2. create toot
  3. update post with toot id

I just want to publish my post, have a matching toot automatically be generated, and have the original post be able to discover the corresponding toot_id. I want to keep my blog static and don't really want to add this to the build pipeline, so what I need is a web service that will lazily publish the toot and return the toot_id for a given URL.
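The lazy-publish lookup reduces to a small function. A hypothetical sketch (all names are made up; the real service would call the Mastodon API and persist the URL-to-toot mapping):

```python
# Hypothetical sketch of a lazy-publish lookup. In a real service the cache
# would be persistent storage and publish_toot would call the Mastodon API.
_toot_cache = {}


def publish_toot(url):
    # Stand-in for the Mastodon status-create call; returns a fake toot id.
    return f"toot-{abs(hash(url)) % 10000}"


def get_toot_id(url):
    # The first request for a URL publishes the toot; every later request
    # returns the already-published toot_id.
    if url not in _toot_cache:
        _toot_cache[url] = publish_toot(url)
    return _toot_cache[url]
```

The blog post's comment widget then only needs to call this endpoint with its own URL to find the toot to thread replies from.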

Blogging with mkdocs

claassen.net was converted from static html and a wordpress site behind nginx on an EC2 instance to a static site generated by mkdocs and hosted on S3.

The motivation

For the last ~10 years, I hosted everything on a single EC2 instance behind an Elastic IP: a couple of wordpress sites, some static sites, and a couple of node apps, all served through nginx. Deploys were git checkouts and some shell scripting. After having my server taken over once because of a wordpress vulnerability, I ran all my wordpress sites with read-only files, which felt decent; I just had to make them read/write for version and plugin updates. Mind you, these are all very low-traffic hobby sites.

But I really wanted to get to immutable deploys: run each site as a separate docker container that I could easily test locally, so that a deploy is just a replacement. I converted one WP site to docker and figured out how to run it with ECS. Then last month I set out to decommission the EC2 instance and convert everything that was left.

My WP deploys were using the default WP container as a base, but claassen.net was structured with the blog as a sub-directory and a bunch of static sites elsewhere. I considered reconfiguring the container to serve the existing hierarchy through its nginx, but in the 10 years of not posting, my taste in authoring had moved distinctly from WYSIWYG to markdown, and if I was going to get back to blogging, I wanted to go static. I briefly considered jekyll, but I had been using MkDocs at work and really liked it, plus I am far more fluent in python than ruby. Some brief googling later, I found Material for MkDocs, which comes out of the box with blogging support, and with LoneKorean's WordPress Export to Markdown I was well on my way.

Deploying many sites in ECS using one ALB

My current go-to deployment strategy is AWS Elastic Container Service (ECS) using Fargate behind an Application Load Balancer (ALB). Each site is its own stateless docker container persisting dynamic data in RDS and/or S3. When I make a change, I build the new container, push it to ECR, create a new task revision and ECS deploys the site for me.

I've now set this up a couple of times and each time I struggle to recollect all the steps along the way, so it's high time I write it down so that I can look it up next time. And now that I understand this a bit better, I was also able to consolidate my infrastructure, since my original approach wasn't necessarily the most cost-efficient setup.

Aside from remembering/reverse-engineering all the pieces needed, the part I always got stuck on was the apparent catch-22: the load balancer wants a target group, the target group wants an IP, while the ECS service wants a load balancer set up before it provides said IP.

ECS RunTask needs public IP to access env file

Learning how to use ECS tasks to run some cron jobs has been an opaque journey to say the least.

I knew my TaskDefinition was fine, because my server was running, but I wanted to use that same definition with a different container command and schedule its execution for housekeeping tasks.

I started by creating an EventBridge schedule, which seemed straightforward enough. But as soon as I created it, I was puzzled that there didn't seem to be a way to trigger it for testing, nor was there any kind of information on whether the last invocation had happened and succeeded.

Closing out a decade of silence

About a decade ago I stopped posting on this blog. It wasn't a conscious decision, but between twitter and stackoverflow, my attention had simply shifted. Twitter provided an easy way to post tech comments, seductive in its immediacy and enforced brevity. Like many other tech bloggers, I let myself believe that the interactions on twitter were a sufficient replacement for blog posts. And to quiet the niggling objections to that delusion, stackoverflow served the need for asking and answering questions in a more substantive way.

The gamification appeal of stackoverflow wore off after a while, and I became mostly a consumer, with the occasional issue that I got stuck on. It still ends up being my primary source for troubleshooting, but mostly because google drops me off there. These days, getting really stuck usually has me tracking down a relevant Discord community instead.

And twitter I left at the start of the pandemic, primarily because they kept messing with the ordered timeline, and even when nothing was injected, it had turned primarily into doom scrolling.

But last year I decided to give mastodon a try and have found it to be very much like early twitter. Informative, casual, non-sensational. Just a nice place to get some tech news, trends and conversation from.

However, as I play around with whatever evening project I've gotten myself into now, I am reminded of why I started this blog in the first place: capturing my troubleshooting and discovery, so that next time I run into the same or a similar issue I will recall having written it down, or at least have google lead me back here again. That's what I hope to accomplish here over the next year and beyond.

As part of the move, I've dropped wordpress in favor of static generation using Material for MkDocs. I'll talk about how I ended up there in a future post, assuming this New Year's resolution holds.

Josh.js 0.3 roadmap: rethinking I/O

My goal has been to expand Josh.Shell to be a better emulation of a console window. It seems simple enough: execute a command, print the output in a div. But print something longer than your shell window and you immediately see where the real thing has a couple more tricks up its sleeve:

cat foo.txt

If foo.txt is too large, you could use less:

less foo.txt

And now you have pagination, plus a couple of other neat features.

Ok, let's implement that in Josh.Shell, starting with the less command. It could be hardcoded to know the div the content is displayed in, do some measuring, and start paginating output. Aside from being ugly, you immediately run into another problem: in order to pause paginated output after the right line, how do you determine where one line begins and ends? Which only further exposes the problem of outputting into a div: sure, the browser will wrap the text for you, but by delegating layout to the browser, you've lost all knowledge about the shape of the content.

To start taking back control over output, we need the equivalent of TermCap, i.e. an abstraction of our terminal div that at least gives height and width in terms of characters. Next, we need to change output to be just a string of characters with line feeds. This does lead us down a rabbit hole where we'll eventually need to figure out how to handle ANSI terminal colors and other character markup, but for the time being, let's assume plain-text ASCII.
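As a sketch of the idea (in Python rather than Josh.Shell's JavaScript, with invented names), the abstraction only needs a character-cell size plus a way to break plain text into display lines:

```python
class TermCap:
    """Minimal terminal description: size in characters, not pixels."""

    def __init__(self, width, height):
        self.width = width
        self.height = height


def to_lines(text, termcap):
    """Split plain text into display lines, honoring explicit line feeds
    and hard-wrapping anything longer than the terminal width."""
    lines = []
    for raw in text.split("\n"):
        while len(raw) > termcap.width:
            lines.append(raw[: termcap.width])
            raw = raw[termcap.width :]
        lines.append(raw)
    return lines
```

With output in this shape, a pager knows exactly which lines fill one screen: emit `termcap.height` lines, then wait for a keypress.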

Now we could implement a basic form of less. But chances are the first time you want to use less is a scenario such as this:

ls -lah /foo | less

i.e. we don't have a file that we want to display; we have the output of an existing command that we want to pipe into less. And this is where we run into our next problem: Josh.Readline has only one processor for the entire command line, i.e. the above will always be handled by the command handler attached to ls. And while we could make that command handler smart enough to understand |, we'd have to do it for every command, and then do the same for < and >.

Intercepting command and completion handling

No, what we need is a way to intercept readline processing before it goes to the command handler for either execution or completion, so that we can recognize command separators and handle each segment appropriately. It also means that commands should no longer return their output to the shell; instead, the pre-processor executing multiple commands receives it and provides it as input for the next command.
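The separator-recognition half of that pre-processor is easy to sketch (Python here for brevity; Josh.Readline itself is JavaScript): split the line on unquoted `|` and hand each segment to its own command handler.

```python
def split_pipeline(cmdline):
    """Split a command line on unquoted '|' into pipeline segments."""
    segments, current, quote = [], "", None
    for ch in cmdline:
        if quote:
            current += ch
            if ch == quote:
                quote = None  # closing quote seen
        elif ch in "'\"":
            quote = ch
            current += ch
        elif ch == "|":
            segments.append(current.strip())
            current = ""
        else:
            current += ch
    segments.append(current.strip())
    return segments
```

The redirection operators `<` and `>` would get the same treatment in the same pass, which is exactly why this belongs in one shared pre-processor rather than in every command handler.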

The pre-processor work will go into Josh.Readline and can be discussed via Issue 16, while the piping behavior will be implemented on top of the pre-processor work, and discussion of it should happen on Issue 18.

Standard I/O

We certainly could just chain the callbacks, but we'd still have no way of providing input, and we'd end up being completely synchronous, i.e. one command would have to run to completion before its output could be piped to the next.

Rather than inventing some crazy custom callback scheme, what we are really looking at is just standard I/O. Instead of getting a callback to provide output, the command invocation should receive an environment, which provides input, output, and error streams along with TermCap and a completion-code callback. The input stream (stdin) can only be read from, while output (stdout) and error (stderr) can only be written to. As soon as the out streams are written to, the next receiver (command or shell) will be invoked with that output as its input. Stderr by default will invoke the shell regardless of what other commands are still in the pipeline.
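A toy model of that environment, sketched in Python with asyncio queues standing in for the streams (names are invented, and this omits stderr, TermCap, and the completion-code callback for brevity):

```python
import asyncio


class Stream:
    """One-way character stream: readers receive chunks as they are written."""

    def __init__(self):
        self._q = asyncio.Queue()

    async def write(self, chunk):
        await self._q.put(chunk)

    async def close(self):
        await self._q.put(None)  # end-of-stream marker

    def __aiter__(self):
        return self

    async def __anext__(self):
        chunk = await self._q.get()
        if chunk is None:
            raise StopAsyncIteration
        return chunk


async def cat(env):
    # Toy command: copies stdin to stdout as chunks arrive.
    async for chunk in env["stdin"]:
        await env["stdout"].write(chunk)
    await env["stdout"].close()


async def upper(env):
    # Toy command: uppercases each chunk as it arrives.
    async for chunk in env["stdin"]:
        await env["stdout"].write(chunk.upper())
    await env["stdout"].close()


async def run_pipeline(commands, text):
    # Seed the first stdin, wire each command's stdout to the next stdin,
    # and collect whatever reaches the end of the pipeline.
    source = Stream()
    await source.write(text)
    await source.close()
    stdin, tasks = source, []
    for cmd in commands:
        stdout = Stream()
        tasks.append(asyncio.create_task(cmd({"stdin": stdin, "stdout": stdout})))
        stdin = stdout
    chunks = [c async for c in stdin]
    await asyncio.gather(*tasks)
    return "".join(chunks)


print(asyncio.run(run_pipeline([cat, upper], "hello")))  # prints HELLO
```

The key property is that all commands run concurrently: `upper` starts transforming chunks as soon as `cat` emits them, instead of waiting for `cat` to finish.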

All these changes are planned for 0.3, a minor revision bump, because they will likely introduce some breaking changes. I don't want to stop supporting the ability to just return HTML, so the stdio model might be something to opt into, leaving the current model in place. If you have feedback on the stdio and TermCap work, please add to the discussion in Issue 14.

One other pre-requisite for these changes is Issue 3. In a regular console, text followed by a backslash, a space, and more text, or a quoted string, is treated as a single argument. Josh.Readline does not do this, causing some problems with completing and executing arguments that contain spaces, and that will be even more of a problem once we support piping, so it needs to be fixed first.
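Sketched in Python (the real fix lives in Josh.Readline's JavaScript), the tokenization being asked for looks roughly like this:

```python
def tokenize(line):
    """Split a command line into arguments, honoring quotes and
    backslash-escaped spaces (a simplified subset of shell word splitting;
    empty quoted arguments are not preserved)."""
    args, current, quote, escaped = [], "", None, False
    for ch in line:
        if escaped:
            current += ch  # escaped character is taken literally
            escaped = False
        elif ch == "\\":
            escaped = True
        elif quote:
            if ch == quote:
                quote = None  # closing quote; quotes are stripped
            else:
                current += ch
        elif ch in "'\"":
            quote = ch
        elif ch == " ":
            if current:
                args.append(current)
                current = ""
        else:
            current += ch
    if current:
        args.append(current)
    return args
```

With arguments tokenized this way, both completion and execution see `my\ file.txt` as one argument, and the pipeline pre-processor can safely operate on whole tokens.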