Plowing the Seas

Indeed, one of my major complaints about the computer field is that whereas Newton could say, "If I have seen a little farther than others, it is because I have stood on the shoulders of giants," I am forced to say, "Today we stand on each other's feet." Perhaps the central problem we face in all of computer science is how we are to get to the situation where we build on top of the work of others rather than redoing so much of it in a trivially different way. Science is supposed to be cumulative, not almost endless duplication of the same kind of things.


Richard Hamming, 1969

On: On Go

Day one: Read this. Read every word, then read the comments. Then sit there for a while and think it's all rubbish. Maybe even read it again, sleep on it-- whatever you have to do. (I'm describing my usual ponderous way of long-suffering-thinking, a term I coined just now.)

Day two: Read all of the links in his references section. All of them. Look up the author and read some of his other posts. Maybe this beautiful one on Ada.

Day three: Realize that you know absolutely nothing.

Reading Roundup

Citation Needed: On Why Array Indices Start With 0, by Mike Hoye

Struct packing? Pointer arithmetic? Turns out array indices started at 0 long before C even existed, and unless you guessed that this decision was guided by yacht racing in the 50s and 60s, you'd be dead wrong. This article is incredible not because the question has finally been answered, but because it describes the humble beginnings of software engineering. Computers were different in those days...

The Rise of Worse is Better, by Richard P. Gabriel

This is one of those famous papers by a genuine, old school computer scientist. The kind that probably thinks on a whole different level than you or I. This paper is incredibly interesting, not because Gabriel presents the theoretical design considerations of the most academic family of all languages, Lisp, but because he describes why Lisps have never been a valid solution in the real world.


Satoshi Nakamoto, Time Traveler

Imagine for a moment that in the distant future, the first time traveler makes his (or her) first voyage. He's been working in his garage for months, maybe years, on whatever interesting things he can think up. There are all sorts of homemade robots and automated devices that scurry about his house. He is frequently seen taking walks-- he's a social person generally, except at night, when he says goodnight to his friends and disappears into his workshop. Then one day, he's gone. And he never comes back. He's transported himself to a time and a place where things are not quite what he thought, and he's stranded there to live out his days. Whoops.

Now imagine the first country to make a trip in time and bring its traveler safely back to the present. Perhaps there are a few moments of overlap (probably floating point errors) where the world wonders at the two identical men standing beside the machine. The world would take notice.

Eventually it would become cheap enough that large corporations could devise time machines, or perhaps there would be a sort of time war-- but whatever the catalyst (perhaps one day all the world's bananas suddenly disappeared), eventually time travel would be regulated and policed. People wouldn't be allowed to just travel about the timeline willy-nilly.

So why have we never seen someone from the future? Well my guess is that we have. But as I said before, there are strict rules about that kind of thing in the future. If someone had come back in time, either they are a researcher with strict instructions to never interfere with the people local to that time, or they are a criminal, in which case the time police would catch them and simply rearrange the timeline such that the event never occurred in the first place. That's why you're sitting there and I'm sitting here and we both doubt that time travel will ever be a reality.

Now, we've all thought about what we would do if we could go back in time. The mind's first jump is obviously to ride a dinosaur, but the gravity and irrelevance of such an action kicks in and you devise a new plan. Just about every time travel movie has the same notion: you play the stock market. You take today's paper and you go back in time (just one week will do) and you clean up. There's no real limit to how many times you can do it-- heck, maybe that's been Warren Buffett's plan all along. But I doubt it.

The problem is, the time police know all about this scheme. There are all sorts of ways they monitor this stuff-- it's as easy to watch the time dials as it is to watch a cyberhacker reset the balance on his library card. But that would be today's equivalent of marching into a bank with a gun and demanding all the money from the vault; except with the time police, they can wipe away the fact that it ever happened in the first place.

This is where our friend Satoshi Nakamoto comes in.

Nakamoto is most probably a time cop. He sits at his desk fiddling with numbers and computers to track down the perps. Then he goes out into the field and busts some skulls. Nakamoto is good. He's great at his job but he can never seem to climb the ladder. Or perhaps, just as curiously, he just doesn't want to... Why does he, an old timer, still sit with the rookies? Why doesn't he ever accept the captain's invitation to grab a beer after work? It's because Nakamoto has an end game.

As a civil servant, Nakamoto probably makes next to nothing, and what's worse-- his accomplishments have all been erased. Imagine spending months or even years tracking down fugitives through monumental futuristic cities bathed in neon, through Old Jerusalem when it was new, through the open plains of The New World when Mother Nature still owned the land: and when you finally catch them, the entire event never happened. It's a tough gig.

Nakamoto wanted out and he knew how to do it. He watched criminal after criminal screw the pooch. They made absolutely every mistake possible. See, the problem is: they are always too loud. After all, that's the only way to make money. How can you get paid otherwise? That's why Nakamoto has opted for an entirely new strategy.

Think of how brilliant it is! And it all makes sense! Nakamoto has amassed what-- $400 million so far? With endless opportunities to invest, to buy, to sell. In our lifetimes he may have the power to crush kingdoms and dictate laws all through a veil of mathematical anonymity.

Nakamoto, the time traveler, will never be identified-- not ever. Because if we knew who he was, none of this would have happened.

bo-X Behind the Scenes: Part I - Overview

Introducing bo-X

Last year I dedicated quite a bit of time to writing a 2D rendering engine on top of WebGL. I call it bo-X. I'll admit, most of the time was spent internally applauding myself for the excellent name. It's pronounced "bo-chi" (that last letter is a Greek chi, by the way), one-upping the incredibly arrogant name of the famous typesetting platform LaTeX.

But enough about how awesome the name is-- how awesome is bo-X? Well, let me just preface with this: it won't run on most browsers-- no idea why. Does that answer the question for you?

What have I learned?

bo-X is a modest little project, started to further my knowledge of OpenGL and 2D engines. There are some interesting problems and I think I have come up with some interesting solutions. But perhaps most interesting of all, I discovered just how much the right tools matter. I could write volumes on that one idea alone, but I'll sum it up here in a single adage: 'a poor craftsman blames his tools' is just plain wrong.


Workflow is so dang important. Streamlining every single piece of the process from your first keystroke to your last is essential. I have always known this, but many times I find myself working around the same workflow issues over and over again. Working on bo-X mostly on the bus to and from work has renewed my vigor for a flawless, streamlined workflow, because I literally have 30 minutes to get into the project, make a useful contribution, test it, and get out.

I'm not just waxing on, I swear!


Obviously, the build process is a huge part of the workflow. Particularly in JavaScript, where you generally just throw spaghetti at a wall and hope things stick (in the right order too), the build process needs to be fast and helpful.

Ant is kind of a no-brainer, so I made sure to have it up and running as soon as possible. I had several targets:

  • Clean - Deletes the build target directory.
  • Validate - Runs JSHint on all my unminified JS.
  • Concatenate - Concatenates all my unminified JS files in the correct order. This was broken out into separate concatenated files: dependencies, editor, and core.
  • Minify - Minifies the concatenated JS.
  • Generate Docs - Generates all jsdocs from source.
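For illustration, here's a minimal sketch of what the first two of these targets might look like in a build.xml. The file, directory, and property names are made up for the example, not taken from bo-X's actual build file:

```xml
<project name="boX" default="concatenate" basedir=".">
    <!-- Illustrative paths; adjust to your own layout. -->
    <property name="build.dir" value="build"/>
    <property name="src.dir" value="src"/>

    <!-- Clean: delete the build target directory. -->
    <target name="clean">
        <delete dir="${build.dir}"/>
    </target>

    <!-- Concatenate: glue the unminified JS together in a fixed order. -->
    <target name="concatenate" depends="clean">
        <mkdir dir="${build.dir}"/>
        <concat destfile="${build.dir}/core.js">
            <filelist dir="${src.dir}" files="dependencies.js,editor.js,core.js"/>
        </concat>
    </target>
</project>
```

The `depends` attribute is what chains the targets together, so running the default target gives you a clean, ordered build every time.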

There's probably nothing here you haven't seen before, but that's because you've already learned that these steps are important. If you don't have a process similar to this one, get it done! It will save you oodles of time down the road. In addition, I had targets to copy sources and concatenated files about. Being able to specify dev or prod is essential, as it lets you skip some of the targets and speed up the build-- iteration speed, my friends!


A long time ago I used to use Aptana for any and all web development. Sometime last year, however, I tried out a trial of PHPStorm and holy crap-- I will never use anything else for web development again. I could tell you about all sorts of nifty little things PHPStorm does for you, but I'll only focus on two:

  1. Automatic Upload. I was able to configure PHPStorm's automatic upload so that every time I saved a file, the sources were uploaded to the correct remote directories. Turns out, this is really amazing. I found myself testing mostly on a real environment simply because it was just as easy as running locally. In fact, due to cross-origin policy crap, it was easier than running locally.
  2. JS Test Driver. PHPStorm's built-in test driver blows away just about any unit testing tool I've ever used on any platform. For a large portion of the project, I really got into actual test-driven development because it was a snap to run unit tests instantly in Firefox, Chrome, IE, and Opera simultaneously. This is a key missing piece when working in Unity or even Flash. I have used both ASUnit and NUnit, but there is no fast, efficient way to run tests and collect results, especially across targets. I would love to be proven wrong about Unity, but in my experience, the amount of friction involved with unit testing Unity applications is great enough to make you reconsider using unit tests at all.


One thing I have never done well is document my projects. Because of this, one of my goals for bo-X has been excellent documentation. I decided upon a two-pronged approach: a combination of JSDoc and Markdown.

I wanted a marriage between code and comments, i.e. I wanted the source to contain as much of the documentation as possible. Otherwise, you're going to spend a lot of time trying to keep the documentation in sync with the source code. By adding copious comments in source via JSDoc, it is much easier to change them both at the same time. Plus, with Ant, generating new docs is a snap.
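To make that concrete, here's a minimal sketch of what a JSDoc-commented function looks like. The function itself is illustrative, not from bo-X's source:

```javascript
/**
 * Linearly interpolates between two values.
 *
 * @param {number} a - The start value.
 * @param {number} b - The end value.
 * @param {number} t - The interpolation factor, typically in [0, 1].
 * @returns {number} The value t of the way from a to b.
 */
function lerp(a, b, t) {
    return a + (b - a) * t;
}
```

Because the `@param` and `@returns` tags live right above the code they describe, updating one without the other is hard to miss in review, and the doc generator picks them up for free.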

As for Markdown-- well there's not that much to explain there. Markdown is awesome and it allows me to use GitHub to serve up documentation. The Markdown files are mostly for explanations of entire systems or examples of how to use different objects. They provide high level looks at bo-X.

Fun and Worthless

The last thing I learned was that bo-X was fun to write but dang is it completely useless. I came up with some cool ideas and I learned a lot about OSS (I swear I'll try to never write a douchy bug report for an OSS project again), and that may have to be enough this time. I think that's just how things are: I've got to create for my own sanity.

Check it out on GitHub. Maybe someday I'll get it working in more than one browser... ;)

SuperColliders in Unity

I was fooling around with collision a few weeks ago so I thought I'd write about it.


The Separating Axes Theorem is a simple theorem concerning the collision of convex polyhedra. Basically, you take two polyhedra and create an axis perpendicular to every face of both polyhedra. For two cubes you would have 12 axes, one perpendicular to each face of both cubes. However, in the case of a single cube, you can see that there are really only 3 unique axes, as the other 3 are duplicates. That is a very useful optimization... (Strictly speaking, in 3D you also need axes built from the cross products of the two polyhedra's edge directions to catch edge-on-edge cases; the face normals alone don't cover everything.)

Once you have all the axes, iterate over each axis and project both polyhedra onto it. The two polyhedra intersect if and only if the projections overlap on every axis. What's great about this is that if you keep track of the axis with the smallest overlap, you end up with a resolution vector, i.e. a minimum translation vector that will push the objects apart!

The algorithm is fairly straightforward and it's also fairly cheap, as the full set of axis projections only needs to be computed for intersecting polyhedra. For any non-intersecting pair, the theorem lets us exit as soon as we find an axis with no overlapping projections.
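The project-and-compare loop can be sketched in a few lines of JavaScript. This is a minimal 2D illustration with made-up shape and axis representations, not the demo's actual (Unity/C#) code:

```javascript
// Project every vertex of a shape onto a unit axis and keep the extents.
// A shape is an array of {x, y} vertices; an axis is a unit {x, y} vector.
function project(shape, axis) {
    let min = Infinity, max = -Infinity;
    for (const v of shape) {
        const d = v.x * axis.x + v.y * axis.y; // dot product
        min = Math.min(min, d);
        max = Math.max(max, d);
    }
    return { min, max };
}

// The shapes intersect only if their projections overlap on EVERY axis;
// we exit early the moment we find a separating axis.
function intersects(shapeA, shapeB, axes) {
    for (const axis of axes) {
        const a = project(shapeA, axis);
        const b = project(shapeB, axis);
        if (a.max < b.min || b.max < a.min) return false;
    }
    return true;
}
```

For axis-aligned squares, the unique axes are just (1, 0) and (0, 1); tracking the axis with the smallest overlap inside the loop is all it takes to also recover the resolution vector.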


Unity comes bundled with a powerful physics engine-- so why would you ever use this technique? PhysX is great, but it's also overkill for many applications. If you just need collisions, not resolutions, then SAT may be cheaper. This is the case much of the time, like when you fire a weapon and want it to explode when it hits something. There's no collision resolution needed, just the point of impact.

For my particular application, I had thousands of cubes and projectiles firing through them. If I had a Collider running on every cube, I would drain the planet of all natural resources. Instead, I partitioned the space into a hash, then did narrow-phase collision detection with SAT. I was able to scale up to 50k cubes at a solid 60 fps with dozens of projectiles flying through them.
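The broad phase described above can be sketched roughly like this. This is an illustrative JavaScript version (the real implementation is C# in Unity), and the cell size and string-key scheme are my own assumptions for the example:

```javascript
// Bucket objects by integer grid cell so the narrow phase (SAT) only has
// to run against objects in the same neighborhood, not all 50k cubes.
function cellKey(x, y, cellSize) {
    return Math.floor(x / cellSize) + ',' + Math.floor(y / cellSize);
}

class SpatialHash {
    constructor(cellSize) {
        this.cellSize = cellSize;
        this.cells = new Map(); // key -> array of objects
    }

    insert(obj) {
        const key = cellKey(obj.x, obj.y, this.cellSize);
        if (!this.cells.has(key)) this.cells.set(key, []);
        this.cells.get(key).push(obj);
    }

    // Returns the narrow-phase candidates for a point: everything
    // sharing its cell. (A real version would also check neighbor cells
    // for objects larger than one cell.)
    query(x, y) {
        return this.cells.get(cellKey(x, y, this.cellSize)) || [];
    }
}
```

Each projectile then only runs SAT against the handful of cubes its cell returns, which is what makes the 50k-cube scale feasible.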


I have extracted a simple demo and stuck it on GitHub.

Hit play, then move the objects around in the scene view. Turn on gizmos to see the axis projections.

Separating Axes Theorem Demo

It may be difficult to tell without playing around with it but each axis is drawn from the same origin. The light green axes are orthogonal to the faces of the small cube and the light red axes are those of the large cube. The solid green line segments are the projections of the small cube onto each of the axes.

Separating Axes Theorem Demo Intersection

When the two objects intersect, they both turn red, and as you can see from the screenshots, scale and rotation are both easily accounted for.

Further Thoughts

There is obviously a lot of room for improvement. While I created an interface that will work for any convex polygon, the more useful strategy would be tooling that lets you create convex collision shapes: for instance, generating a set of collision shapes from any mesh, or adding wrappers for other primitives.

The interface also has a few shortcomings of its own. I would like to be able to use this same technique for "roundy" shapes, like a circle. SAT can be used for these, but the axes need to be created dynamically to point at the center of the circle. You will see what I'm talking about if you take a look at the article I linked to at the beginning.

Maybe my next iteration will include some of these generalizations.

Ninject and Unity

The Background

I have searched high and low for a good dependency injection solution that works well with Unity, even when AOT'd. Unfortunately, even my most brilliantly worded Google queries have yielded fruitless results. I'm a little bit ashamed to admit this, but I even tried 'Binging' it.

A host of full-blown .NET DI frameworks won't compile against .NET 2.0, and the remaining libraries usually JIT, meaning they won't work when compiled for crummy old, JITless iOS. What's more, it seems that no one in Unity land has figured out how to use MonoBehaviours in conjunction with dependency injection.


When configuring Ninject, there's a simple flag that controls how the injection mechanism works. Simply tell Ninject to use reflection-based injection rather than JIT:

_kernel = new StandardKernel(
	new NinjectSettings {
		UseReflectionBasedInjection = true,
		LoadExtensions = false
	},
	new CoreModule());

The LoadExtensions setting is also crucial, as it seems that the way Ninject loads extensions also breaks without a JIT.

Working With MonoBehaviours

I'm not going to go over how to use Ninject, but I will go over my solution for using injection in tandem with GameObjects and MonoBehaviours. There are two real possibilities: either you want to inject into a MonoBehaviour, or you want to inject a MonoBehaviour into something else. As a solution to the first, I created a simple class called InjectableMonoBehaviour and trigger injection from Awake: the earliest possible time I have control after MonoBehaviour construction.

protected virtual void Awake() {
	Main.InjectScript(this);
}

In Main.cs, I'm doing something equally non-fancy:

public static void InjectScript(MonoBehaviour script) {
	if (null == _main) {
		GameObject main = GameObject.FindGameObjectWithTag("MainCamera");
		_main = main.GetComponent<Main>();
	}
	_main._kernel.Inject(script); // the kernel configured above
}

With InjectableMonoBehaviour, I can do incredible things. Check out this MonoBehaviour:

public class Touchable : InjectableMonoBehaviour {
	private Bounds _bounds;

	[Inject]
	public InputController InputController { get; set; }
	// elided
}

Now my instance of Touchable magically has a reference to an InputController. Brilliant.


I've covered injecting into MonoBehaviours, so now I'll cover the more tricksy, injecting MonoBehaviours into other objects.

HierarchyResolver<T> is a subclass of Provider<T>, which is essentially a factory class provided by Ninject. Overriding Provider<T> allows you to specify a specific implementation for the injection of a particular type. This allows me to do a few cool things.

With HierarchyResolver, you can place a MonoBehaviour in the scene and inject that specific instance through Ninject's usual injection mechanisms. For instance, say I place a CameraController on the scene's main camera. If I want to inject that via Ninject, I just have to configure a HierarchyResolver<CameraController> in whatever module I wish:

Bind<CameraController>().ToProvider(new HierarchyResolver<CameraController>("MainCamera"));

This is binding a Provider<CameraController> to the Ninject module. Additionally, HierarchyResolver can take a tag as a constructor argument. The tag is where a recursive search for the dependency is started. I can leave off the tag if I want to, but using tags will always be more straightforward and performant. In this case, I've tagged the scene's camera with the "MainCamera" tag, so the resolver finds the dependency immediately.


I've done one more really cool thing, I think.

The above method of injecting MonoBehaviours into other objects is very useful, but has some shortcomings. Most notably, this method can only resolve a single instance of an object. What if you want to get a little fancier?

I've created the attribute InjectFromHierarchy to remedy this.

InjectFromHierarchy extends Ninject's InjectAttribute, but provides a tag and a query string.

[InjectFromHierarchy("HUD", "Readout.Star1")]
public Star LeftStar { get; set; }

[InjectFromHierarchy("HUD", "Readout.Star2")]
public Star RightStar { get; set; }

In this example, I have a GameObject tagged "HUD". Somewhere down in its children (at any level), there is a GameObject named "Readout" with two Stars on it: "Star1" and "Star2". This will grab references to both of those objects and inject them into this class.


What's really cool is that the query string you use is not limited by the conventions of Unity's Transform.Find. In Unity's method, the path separator denotes direct children, so "HUD/Readout/Star" represents a direct parent-child relationship, HUD->Readout->Star. With my query string, the periods are recursive, so "HUD.Readout.Star" means HUD->...->Readout->...->Star.
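To make the recursive semantics concrete, here's a language-agnostic sketch in JavaScript over plain {name, children} nodes. This is illustrative only-- the real implementation walks Unity's Transform hierarchy, not objects like these:

```javascript
// Depth-first search for a descendant with the given name, at ANY depth.
function findRecursive(node, name) {
    for (const child of node.children) {
        if (child.name === name) return child;
        const found = findRecursive(child, name);
        if (found) return found;
    }
    return null;
}

// Each period-separated segment restarts the recursive search from the
// node matched by the previous segment, so "Readout.Star1" finds a
// "Readout" anywhere below the root, then a "Star1" anywhere below that.
function query(root, path) {
    let current = root;
    for (const segment of path.split('.')) {
        current = findRecursive(current, segment);
        if (!current) return null;
    }
    return current;
}
```

This is why intermediate container objects (layout groups, anchors, and so on) don't break the query: they're simply skipped over by the recursive descent.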

I would like to update this in the future to be more similar to bo-X's scene graph query language, but haven't had the time as of yet. Also, I need to blog about bo-X sometime...

Get it on GitHub!