Loyal followers! I'm not dead! In fact I've been working away on a few cool projects, but I'm not quite ready to divulge. Anyway, I thought I would post about a few different resources I've been reading through lately:

# A Conversation with Anders Hejlsberg

You have to read every word. Seriously, stop reading this stupid blog and click that link. This is an incredible eight-part interview with the creator of C# (and now, TypeScript). In this interview, Anders gives his vision of C#: what it is, why it exists, and why certain design choices were made.

# The Effective Engineer's Handbook

This article is another incredible read. I'm not sure how, but Akhamechet distills years of engineering experience into a short article on what makes a good engineer. This is actually a pretty challenging read-- you might discover you're not so hot.

# The Nature of Lisp

Another great article by Akhamechet. It's old, but I rather enjoyed it. He attempts to explain Lisp by starting with XML and the popular Java build tool, Ant. From there, Akhamechet describes with clarity the problems that Lisp attempts to solve. It's a fascinating read, but I admit, I haven't yet written any Lisp.

# How to Write Portable WebGL

I'm not sure who Florian Boesch is (or how to pronounce his name), but he's got an excellent write-up on many of the obscurities of WebGL. Give it a read!

# JavaScript Patterns

I know, I know, more JS, right? As much as I want to hate it, it's pretty dang fun. This website describes in detail many of the more useful JavaScript patterns. I guarantee you wouldn't have thought of half of them.

# Quick Post: Easy Enums

On any engineering team, it's sometimes difficult to agree on best practices. Everyone has different ideas of good style, consistency, etc. But there's one thing we've reached consensus on: enums.

Here is what our best practices say an enum definition should look like (keep in mind, this is only for int valued enums):

public enum MyEnum
{
    Unset = -1,

    Default,
    ...

    NumEnums
}

There are a couple of cool things going on here.

First, an enum field is always initialized to 0, so there MUST be a 0-valued default member. The rest of the definition comes from making enums easier to use as array indices. If the value is Unset, you know you can't index into an array with it, and NumEnums, cast to an int, handily gives you the number of real values-- exactly the length of a corresponding array.

Here is where this shines:

for (int i = 0, len = (int) MyEnum.NumEnums; i < len; i++)
{
    // do stuff
}

Handy, eh?

# Quick Post: My JS File Template

I use PHPStorm for all my JavaScript needs. Dang it's good. Anyway, I've been working on a secret JS project I'll hopefully share one day, and one thing I've been working on is my JS file template. There's not a lot to it, but this is what mine looks like:

/**
 * Author: thegoldenmule
 * Date: ${DATE}
 */
(function(global) {
    "use strict";

    var ${NAME} = function() {
        var scope = this;

        return scope;
    };

    ${NAME}.prototype = {
        constructor: ${NAME}
    };

    // export
    global.${NAME} = ${NAME};
})(this);

Just a couple things: I like wrapping my whole definition in a closure. This is both a safeguard against my own accidental globals (yes, even thegoldenmule accidentally creates a global), and a safe space to stick "static" functions and objects, or even whole internal object definitions.

Inside the constructor function, I use the common pattern of defining a reference that points to this and returning that at the end.

The prototype is straightforward, as is the export at the end. A cool caveat is that it's then really easy to also export for Node.

# C# Musings...

I had a few ideas for C# constructs, so I thought I might as well write them down. Perhaps, some day, someone can tell me why these don't exist...

## Making Generics More Generic

Several times I've been frustrated by the fact that Func<TResult> has a bunch of variants. Wouldn't it be cool if you could specify a generic definition over an arbitrary number of types? This is what I mean:

Func<T, T1, T2, T3, TResult> could become Func<T1, ..., Tn, TResult> or perhaps, even better, Func<T[], TResult>. No, I don't know what the full implications of this type of syntax would be, but from a purely pragmatic perspective, this would be crazy useful.

## Extensions (and Partials) Should Fulfill Interfaces

Think about how cool this is.

I've run into the problem several times where I have to subclass or wrap an object I don't own so that it fulfills an interface I do own. Wouldn't it be cool if, instead, I could use extensions or partial classes to fulfill that interface?

## Constraints on Type Parameters Could Be Constraintier

Constraints are so cool, but I often feel limited by the constraints you can put on a type constructor. For instance, I can enforce a default constructor with new(), but I can't enforce a constructor that takes arbitrary arguments, like new(T1, T2) or even just new(int, float). Wouldn't that be great?

# Quick Post: Currying Func (and variants)

I recently wanted to curry a Func (and variants). Here's how I ended up doing it:

public static Func<TResult> Curry<TCurried, TResult>(this Func<TCurried, TResult> fn, TCurried arg)
{
    return () => fn(arg);
}

public static Func<T, TResult> Curry<TCurried, T, TResult>(this Func<TCurried, T, TResult> fn, TCurried arg)
{
    return param => fn(arg, param);
}

public static Func<T1, T2, TResult> Curry<TCurried, T1, T2, TResult>(this Func<TCurried, T1, T2, TResult> fn, TCurried arg)
{
    return (param1, param2) => fn(arg, param1, param2);
}

This allows you to do some fun stuff!
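For instance, here's the same idea sketched in JavaScript (not the C# above-- just an illustration of what pinning down the first argument buys you):

```javascript
// Pin the first argument of fn, returning a function of the rest --
// the same trick the C# extension methods perform.
function curry(fn, arg) {
    return function() {
        var rest = Array.prototype.slice.call(arguments);
        return fn.apply(null, [arg].concat(rest));
    };
}

var add = function(a, b, c) { return a + b + c; };

// Curry twice to build up an adder one argument at a time.
var addOne = curry(add, 1);
var addThree = curry(addOne, 2);

// addThree(4) → 1 + 2 + 4 = 7
```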

# Quick Post: Sum Table

If you have an array of values, how can you take the sum of any range of them in constant time? With a sum table!

Here's a formal definition:

Given a list of values $v_0, ..., v_n$, a sum table is a corresponding list $s_0, ..., s_n$ such that $s_i=\sum\limits_{j=0}^i v_j$. That is, each value in the sum table is the sum of all the values up to and including that index. Then the sum of the values between any two indices is easily found: $s_j - s_i = \sum\limits_{k=i+1}^{j} v_k$.
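Here's a tiny JavaScript sketch of both halves (the names are mine):

```javascript
// Build the sum table: sums[i] = values[0] + ... + values[i].
function buildSumTable(values) {
    var sums = [];
    var running = 0;
    for (var i = 0; i < values.length; i++) {
        running += values[i];
        sums.push(running);
    }
    return sums;
}

// Sum of values[i + 1] through values[j], inclusive, in constant time.
function rangeSum(sums, i, j) {
    return sums[j] - sums[i];
}

var sums = buildSumTable([3, 1, 4, 1, 5]); // [3, 4, 8, 9, 14]

// rangeSum(sums, 1, 3) → 4 + 1 = 5
```

Building the table is linear, but after that every range query is a single subtraction.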

# Real-Time Clouds Pt 2: The Actual Cloud Part

In part one of this thrilling diptych I unveiled my beautiful classic noise function on the world using many, many lines of someone else's code. Well, in this post I hope to further bedazzle you by actually using that noise function-- and even some of my own code. W00t.

## Old Man Perlin and His Noise

I don't actually know how old Perlin is, but I'm going to guess that he has, at one time, worn really big sunglasses that even block sun from entering sideways, and he probably has pretty full facial hair. He likes to play board games, can only handle two beers in a sitting, and can do useless things with Arduinos (oh crap, I'm describing myself-- except for the old and bearded part).

Perlin came up with a great idea: for a classic noise function $f$, take

$F(Q)=\sum\limits_i \frac{f(2^i Q)}{2^i}$

That is, he composed multiple frequencies of classic noise together, in what's called a fractal sum. In order to get real-time clouds, all I really need to do is translate my classic noise function whilst summing, then translate the entire sum as well. It's exactly as easy as it sounds.

uniform float _T;

float fractalSum(float2 Q) {
float value = 0;

value += noise(Q / 4) * 4;
value += noise(Q / 2) * 2;
value += noise(Q);
value += noise(Q * 2) / 2;
value += noise(Q * 4) / 4;
value += noise(Q * 8) / 8;
value += noise(Q * 16) / 16;

return value;
}

half4 frag(v2f i) : COLOR {
float value = fractalSum((i.uv + float2(_T, _T)) * 32 + 240);
return half4(value, value, value, value);
}

And the C#:

private float _t = 0f;

public float translationSpeed = 0.0001f;

void Update()
{
    _t += translationSpeed;

    MeshRenderer renderer = gameObject.GetComponent<MeshRenderer>();
    renderer.material.SetFloat("_T", _t);
}

As you can see, I've added a float _T, and a function called fractalSum() to the fragment shader. The UVs are simply translated by an amount _T so that the clouds scroll across the material. In fractalSum(), you can see the summation of multiple frequencies of classic noise, from $i=-2...4$. The result:

At this point, I've got static clouds translating, but I want the clouds to actually morph as they move, and I want to be able to change the color of the clouds. It takes just a couple of tweaks to my fractalSum and frag functions:

float fractalSum(float2 Q) {
float value = 0;

value += noise(Q / 4 + _R1) * 4;
value += noise(Q + _R2);
value += noise(Q * 2 + _R3) / 2;
value += noise(Q * 4 + _R4) / 4;
value += noise(Q * 8 + _R5) / 8;
value += noise(Q * 16 + _R6) / 16;

return value;
}

half4 frag(v2f i) : COLOR {
float value = fractalSum((i.uv + float2(_T, _T)) * 32 + 240);
return half4(value, value, value, value) * _Tint;
}

The values _R1 through _R6 are random offsets, which makes the clouds change shape while they are translating. Additionally, I added a simple _Tint property so the color of the clouds could be changed easily.

For the input values of _R1 through _R6, I have six PRNGs that pump random variations into the shader. This could just as easily be one PRNG, but hey, this seems cleaner.

private System.Random[] _prngs = new System.Random[6]
{
    // just a few primes for seeds
    new System.Random(8887),
    new System.Random(1109),
    new System.Random(400157),
    new System.Random(200159),
    new System.Random(299807),
    new System.Random(499787)
};

// to keep the current values
private float[] _rs = new float[6];

// time
private float _t = 1;

public float translationSpeed = 0.0001f;
public float morphSpeed = 0.001f;

public bool translationEnabled = true;
public bool morphEnabled = true;

void Update()
{
    MeshRenderer renderer = gameObject.GetComponent<MeshRenderer>();

    if (translationEnabled)
    {
        _t += translationSpeed;

        renderer.material.SetFloat("_T", _t);
    }

    if (morphEnabled)
    {
        for (int i = 1; i < _prngs.Length + 1; i++)
        {
            _rs[i - 1] += (float) _prngs[i - 1].NextDouble() * morphSpeed;
            renderer.material.SetFloat("_R" + i.ToString(), _rs[i - 1]);
        }
    }
}

## A Simpler Approach?

That was a lot of work for some decent looking clouds. As I commented in part one, one way to simplify this whole process is to generate the gradient lattice texture in the fragment shader rather than on the CPU. Additionally, this frees up texture space. Lucky for me, my coworker sent me a link to a nice example of just such a method:

float4 textureRND2D(float2 uv){
uv = floor(frac(uv) * 1000);
float v = uv.x + uv.y * 1000;
return frac(100000 * sin(float4(v * 0.001, (v + 1.) * 0.001, (v + 1000) * 0.001, (v + 1000 + 1.) * 0.001)));
}

Now I can replace the tex2D calls, where I sampled a texture, with a function that mimics the same behavior. The resulting noise function looks pretty similar to real classic noise:

And the resulting clouds look pretty good too:

I'm actually pretty surprised with the result. The clouds do look a bit different, but I'm not quite sure what about them is strange. If you look at the two screenshots side by side, you'll see what I'm talking about. Using the function that emulates the texture sampling results in rougher Perlin noise, but it's much simpler to implement.

I'm not crazy about my clouds, but they are a decent start. I'd be interested in introducing volume so the clouds don't look so sharp. I'd also love to try simplex noise rather than classic noise when I've got some spare cycles.

# Real-Time Clouds Pt 1: A Study of Noise Functions

## Where to begin?

As a previous post pointed out, I've been working my way through "Texturing and Modeling: A Procedural Approach" and it's blowing my dang mind. For instance, did you know that Scorched Earth is actually not the best example of procedural terrain generation? Not only is this book all fun and mathy, it also has pictures, so it qualifies as something I can wrap my brain around.

Anyway, one of the first sections in the book is dedicated to the study of noise functions, since noise functions are the basis of good procedural texturing and modeling. This class of functions, at its most basic, simply generates evenly distributed random numbers. Easy, right? Turns out, not so much. The book quickly informed me that that's not all a noise function needs to be useful. There is a pretty long list of requirements that make up a good noise function.

## Classic Noise

What I don't know about noise could fill a book. Lucky for you, however, I'm reading that book right now.

Apparently most noise functions fall into a large category called lattice noise. The idea behind lattice noise is simple: evenly distribute PRNs (pseudo-random numbers) over an integral field, then smoothly interpolate between them. Nifty idea, right?

I implemented a lattice noise function that bilinearly interpolated between integral tuples, but it was gross. It looked like a bunch of squares, because, although it was continuous, it was not C1 continuous, so you could see dramatic changes along the bounds of each cell. Most of the time you would instead cubically interpolate or use a spline-- both of those methods produce good C1 continuous noise.
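To make the lattice idea concrete, here's a toy 1D sketch in JavaScript. The sin-based hash is just a stand-in for a real PRN table, and the quintic fade is the same C2-continuous curve that shows up later in this post:

```javascript
// Pseudo-random value in [0, 1) for each integer lattice point.
// A real implementation would use a permutation table instead.
function hash(i) {
    var x = Math.sin(i * 127.1) * 43758.5453;
    return x - Math.floor(x);
}

// Quintic fade: C2-continuous, so no visible seams at cell bounds.
function fade(t) {
    return t * t * t * (t * (t * 6 - 15) + 10);
}

// 1D lattice noise: interpolate between the two surrounding lattice PRNs.
function valueNoise(x) {
    var i = Math.floor(x);
    var f = x - i;
    var t = fade(f);
    return hash(i) * (1 - t) + hash(i + 1) * t;
}

// At integer lattice points the noise passes exactly through the
// lattice values: valueNoise(3) === hash(3)
```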

Anyway, there is a subclass of lattice noise called gradient noise. In this approach, instead of a PRN at each lattice point, you assign a vector representing a gradient. A gradient is a vector that points in the direction of the greatest rate of increase, with magnitude equal to that rate. So really, when you define a gradient field, you can kind of imagine that you're defining a surface, with each lattice point pointing up the steepest incline (this is not mathematically precise, but it helps me visualize). Now the interpolation is between gradients instead of lame old PRNs.

Interpolation over a gradient field is usually done a bit differently than in many other lattice noises, but it's still familiar, as it's a form of bilinear interpolation (so it's dang cheap). Since each lattice point is a vector, you linearly interpolate between the dot products. Here's some source (in Cg)-- it'll all make sense in a minute:

// From Stefan Gustavson's implementation: scale factors for looking up
// lattice points in a 256x256 permutation texture.
#define ONE 0.00390625      // 1.0 / 256.0
#define ONEHALF 0.001953125 // 0.5 / 256.0

float noise(float2 P) {
// Integer part, scaled and offset for texture lookup
float2 Pi = ONE * floor(P) + ONEHALF;

// Fractional part for interpolation
float2 Pf = frac(P);

// Noise contribution from lower left corner
float2 grad00 = tex2D(_PermTexture, Pi).rg * 4.0 - 1.0;
float n00 = dot(grad00, Pf);

// Noise contribution from lower right corner
float2 grad10 = tex2D(_PermTexture, Pi + float2(ONE, 0.0)).rg * 4.0 - 1.0;
float n10 = dot(grad10, Pf - float2(1.0, 0.0));

// Noise contribution from upper left corner
float2 grad01 = tex2D(_PermTexture, Pi + float2(0.0, ONE)).rg * 4.0 - 1.0;
float n01 = dot(grad01, Pf - float2(0.0, 1.0));

// Noise contribution from upper right corner
float2 grad11 = tex2D(_PermTexture, Pi + float2(ONE, ONE)).rg * 4.0 - 1.0;
float n11 = dot(grad11, Pf - float2(1.0, 1.0));

// Blend contributions along x
float2 n_x = lerp(float2(n00, n01), float2(n10, n11), fade(Pf.x));

// Blend contributions along y
float n_xy = lerp(n_x.x, n_x.y, fade(Pf.y));

// We're done, return the final noise value.
return n_xy;
}

// Improved fade, yields C2-continuous noise
float fade(float t) {
return t*t*t*(t*(t*6.0-15.0)+10.0);
}

Since you were wondering, no I didn't write a large portion of this code. It's "adapted from" (stolen from) Stefan Gustavson's implementation. See what's going on here? It looks tricky but it's not. You take the fractional part and you use dot products to find your four values to interpolate between. Then you bilinearly interpolate between them with a slick fade function for kicks (or, rather, so it's C2 continuous).

What I'm leaving out here, is how to generate the lattice. As you can see in the code above, my lattice is passed in to the shader as a texture. How do I generate this lattice? It's actually not that simple. Here's the accompanying C# code to generate the texture:

private static int[] PERM = new int[256] {
    151,160,137,91,90,15,131,13,201,95,96,53,194,233,7,225,140,36,103,30,69,142,8,99,37,240,21,10,23,
    190,6,148,247,120,234,75,0,26,197,62,94,252,219,203,117,35,11,32,57,177,33,
    88,237,149,56,87,174,20,125,136,171,168,68,175,74,165,71,134,139,48,27,166,
    77,146,158,231,83,111,229,122,60,211,133,230,220,105,92,41,55,46,245,40,244,
    102,143,54,65,25,63,161,1,216,80,73,209,76,132,187,208,89,18,169,200,196,
    135,130,116,188,159,86,164,100,109,198,173,186,3,64,52,217,226,250,124,123,
    5,202,38,147,118,126,255,82,85,212,207,206,59,227,47,16,58,17,182,189,28,42,
    223,183,170,213,119,248,152,2,44,154,163,70,221,153,101,155,167,43,172,9,
    129,22,39,253,19,98,108,110,79,113,224,232,178,185,112,104,218,246,97,228,
    251,34,242,193,238,210,144,12,191,179,162,241,81,51,145,235,249,14,239,107,
    49,192,214,31,181,199,106,157,184,84,204,176,115,121,50,45,127,4,150,254,
    138,236,205,93,222,114,67,29,24,72,243,141,128,195,78,66,215,61,156,180
};

private static int[,] GRAD3 = new int[16, 3] {
    {0,1,1},{0,1,-1},{0,-1,1},{0,-1,-1},
    {1,0,1},{1,0,-1},{-1,0,1},{-1,0,-1},
    {1,1,0},{1,-1,0},{-1,1,0},{-1,-1,0}, // 12 cube edges
    {1,0,-1},{-1,0,-1},{0,-1,1},{0,1,1}  // 4 more to make 16
};

public static Texture2D GenerateValueNoiseTexture()
{
    const int dim = 256;
    Texture2D tex = new Texture2D(dim, dim);
    Color32[] colors = new Color32[dim * dim];

    for (int i = 0; i < dim; i++)
    {
        for (int j = 0; j < dim; j++)
        {
            int offset = i * dim + j;
            int value = PERM[(j + PERM[i]) & 0xFF];

            colors[offset].r = (byte) (GRAD3[value & 0x0F, 0] * 64 + 64); // Gradient x
            colors[offset].g = (byte) (GRAD3[value & 0x0F, 1] * 64 + 64); // Gradient y
            colors[offset].b = (byte) (GRAD3[value & 0x0F, 2] * 64 + 64); // Gradient z
            colors[offset].a = (byte) value; // Permuted index
        }
    }

    tex.SetPixels32(colors);
    tex.anisoLevel = 0;
    tex.filterMode = FilterMode.Point;
    tex.Apply();

    return tex;
}

What is going on here? That's a very good question. Essentially, we have engineers getting fancier than need be.

PERM is a sequence of random integers from 0 to 255-- one less than the dimension of the texture, which must be a power of two, in this case 256. Inside the for loops you can see some fancy nonsense going on that relies on the fact that, for non-negative i, i % 256 is equivalent to i & 255 (0xFF). PERM is mapped onto itself so that with a short sequence of integers, you can generate a long list of PRNs. Then, that value is mapped into GRAD3, which provides gradient values for rgb. The table of gradients doesn't actually need to be that long, but it lists all possible component gradients for an integral field.
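If the masking trick seems magical, here's a quick JavaScript sanity check (wrap is just an illustrative name):

```javascript
// For a power-of-two table size like 256, masking with 0xFF (255) wraps
// a non-negative index exactly like % 256, without a modulo.
function wrap(i) {
    return i & 0xFF;
}

// wrap(300) → 44, same as 300 % 256
```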

Whew. Lots of stuff, right? Well here's where we're rewarded:

Look at that beautiful noise.

## Further Thoughts

There are several thoughts I'm left with for potential future enjoyment:

1. What I've read suggests some potential problems with classic noise. As you can see in the screenshot, it's clear that this noise is axis aligned (see the underlying lattice? even a little?). What would be really nice is if there were no axes present in the final noise function. Turns out, Mr Perlin himself noticed this too and invented a whole new kind of noise called simplex noise to solve some of the problems associated with this (there were other reasons too). I should do a dive into simplex noise in the future.

2. After showing this to one of my coworkers, he suggested that I actually generate the texture on the shader as well. Or, rather, implement a function that acts like I'm sampling from a texture (i.e. my lattice lookup table). I think I may actually try this one soon...

In my previous post, I described a process for generating terrain on a spherical mesh. As the screenshot showed, the algorithm worked pretty well to give you a lumpy looking planet or asteroid. But-- what if you want it to look like Earth? Green continents, blue water? You know the Earth I'm talking about.

Well here's my first attempt:

The caption says it all.

## How does he do it?

In order to get this Earthy feel I really only needed a proper surface shader, since the mesh geometry is uniform across the surface regardless of the bumps and whatnot. First, my properties:

_Center ("Center Point", Vector) = (0, 0, 0, 0)
_OceanColor ("Ocean Color", Color) = (0, 0.1, 0.7, 1)
_LandColor ("Land Color", Color) = (0, 0.7, 0.1, 1)
_BlendDistance ("Blend Distance", Float) = 1
_BlendThreshold ("Blend Threshold", Float) = 0.001

_Center defines the center point of the sphere. This is so that I can calculate how high a vertex is, i.e. its distance from the center of the planet. Then there are colors for the ocean and the land. _BlendDistance essentially defines sea level, and _BlendThreshold defines the distance to blend across. These default values are what you see in the screenshot.

Then we define a vertex program:

void vert (inout appdata_full v) {
float dist = distance(_Center.xyz, v.vertex.xyz);

// flatten the oceans
if (dist <= _BlendDistance) {
v.normal = normalize(v.vertex.xyz);
v.vertex.xyz = normalize(v.vertex.xyz);
v.vertex.xyz *= (1 - _BlendThreshold);
}
}

Stupid simple. Essentially you normalize any vertices that are below sea level, thereby smoothing out the oceans. You also want to correct the normals. Finally, we define a surface shader:

struct Input {
float3 worldPos;
};

void surf (Input IN, inout SurfaceOutput o) {
float dist = distance(_Center.xyz, IN.worldPos);
float startBlending = _BlendDistance - _BlendThreshold;
float blendFactor = saturate((dist - startBlending) / _BlendThreshold);

half4 color = lerp(_OceanColor, _LandColor, blendFactor);

o.Albedo = color.rgb;
o.Alpha = color.a;
}

I've included the Input struct to show that the world position needs to be sent in as well. This is so that I can apply the correct colors at the correct distances. I use a simple lerp and apply it to the albedo, which colors the earth appropriately.

Simple as that!

## Further Thoughts

The only problem with this method is that it doesn't give a ton of definition unless you have an extremely high resolution mesh. The mesh above has over ten thousand vertices. For oceans, which only need to look spherical, that's probably enough, but if you look at the areas where the continents meet the sea, it would be nice if there were more geometry to make the transitions smoother.

An interesting study I may pursue later is iterating over the edges of the land and adding more geometry.

Another idea, which dawns on me now that I'm flattening out oceans in the vertex program, is putting the procedural transformations of vertices actually in the shader, rather than transforming in an iterative fashion on the CPU.

Anyway, just a couple ideas.

# Book Review: Effective C#

It's a fact that there are just too many crappy technical resources out there. It's also a fact that there are probably too many good ones to read in any one lifetime (I've still only made it through book one of Euclid's Elements-- and, embarrassingly, I've never even started my copy of Principia Mathematica). Anyway, I thought I'd start chronicling some of the articles and books I've read lately, partially for my own reference and reflection, but mostly for your edification, my faithful reader.

# Effective C#: 50 Specific Ways to Improve Your C#

Score: 10/10

I had a crippling internal debate when deciding whether or not this book should be the first I review. Actually, let me first give you a quick synopsis of ninety percent of all books on programming languages:

1. Explain reason there needed to be another book on [language/platform/library].

2. Use a bad metaphor to explain what a [variable/data type/array/etc] is.

3a. If the language is classical, explain how to write a Car class, with associated Wheel, Engine, and Trim classes.

3b. If the language is not classical, explain how to write something that emulates a Car class, with associated Wheel, Engine, and Trim abstractions.

It's sad but it's true. And there are a bunch of chapters you skip in the middle that show you how to start a new project in some IDE you're not using.

Back to the internal debate I was having.

You see, this book has none of these qualities. Not once does it give you a horribly misguided definition of object oriented programming (if you haven't read this definition from Alan Kay, you haven't lived). In fact, I can safely say that this book is not like any other technical book I have ever read. It is, quite possibly, wholly perfect.

When I finished chapter two I looked up and realized I was curled up on the floor, completely naked, in a pool of my own tears (much like after watching the last episode of Battlestar Galactica). Used tissues were scattered about like bits of wrapping paper on a lazy Christmas afternoon.

And I had 24/25ths of the book remaining.

My theory requires a bit more research, but I can tell you with well over 99% certainty that Bill Wagner is one of the mighty men of old-- practically a god himself, spawned by whatever deep magic makes computers spend their days humming along, processing and transisting and whatever else they do.

Each chapter is enthralling. Seriously. I'm getting chills just thinking about it. I've read several of the chapters two or three times.

Wagner makes his points in profoundly wise ways. He uses well chosen verbiage when he discusses his topics, making sure to point out obvious strategies, preferred strategies, and when even the preferred strategies go awry.

He explains how the compiler makes its decisions, and lets you know exactly how much smarter the C# compiler engineers are than you. He dives deep into how the CLR thinks, lives and breathes. He walks step by step through CLR operation and gives you a good understanding of why things are the way they are, and why the decision was made that it should work that way.

So, when I say internal debate-- I mean, I didn't want this to be the first book I'd ever reviewed simply because it's all going to be downhill from here.

I'm telling you, if you want to step up your C# game, go buy this book. Expense it to your company. Read it through, then read it again.