I have less than ten hours to teach a dozen surly high schoolers to go from opening their notepads to writing Tetris. This is the objective: to give them enough HTML, CSS, and Javascript to be able to write a very simple video game.

Here’s what I told them:

A high school programming class and a high school cooking class are more or less the same. At the end of a cooking elective, if you followed all the steps you were given in class, you go home with two things: a cake, and a recipe for cake. All of you are going to eat the cake. Some of you might try to make the cake a second time. A few of you will wonder, “If I change this chocolate to strawberry, and make the cake again, will it work?” And one of you might go on to be a professional baker.

The same thing is true of this class. At the end of the class, you’re going home with a game written in HTML, and the source code to that game. The game will be fully playable. Some of you might take that source code and try transferring the game to other computers to see how it works there. A few of you will look at it and wonder, “If I re-arrange these things, can I turn Tetris into Pac-Man?” (The answer is “Yes,” by the way.) And one of you might go on to be a professional software developer.

Oddly, the teacher who organized the elective tells me that that’s the best and most succinct description of what an elective like this is trying to accomplish. So I guess I’m doing something right.

Of course, there’s already that one guy who has a 3D game written in Unity, and took this class so he could learn how to put up a website about it…

As most people know, I’m a Lisp fanboy. Which is somewhat surprising as I tend to think that it’s one of my weakest languages, and whenever I’m reminded that Common Lisp is a Lisp-2 I get the heebie-jeebies; I really like single namespace languages. (I tend not to think of C as a namespace language; it’s really just assembly language with a nice syntax, and C++ is C with a ridiculous preprocessor.)

One of Lisp’s classic problems is that it’s utterly reprogrammable. With the exception of a scant handful of absolutely essential core syntax terms (classically: cons, car, cdr, cond, eq, atom?, lambda, set!, quote), everything in the Lisp language is accessible and re-definable by the programmer. With macros you can add syntax that re-writes, mangles, and just plain defies scope and closure, that reifies complex procedures into one-word functions, and that lets you turn one language into a different language. Lisp would be cool, except every Lisper’s Lisp is a little bit different from every other Lisper’s Lisp. It’s like the C++ problem of generics versus templates versus the STL. C++ has those problems because each of those topics is so huge that someone who’s mastered one has little mental bandwidth left to master the others; in Lisp, those kinds of issues arise because the developer wants them to.

So imagine my horror when I discovered that Perl 6 not only has the same capability, but allows you to define these new operators in any position in the syntax. Just look at this example:

sub infix:<¯\(°_o)/¯> { ($^a, $^b).pick }
say 'Coke' ¯\(°_o)/¯ 'Pepsi';

See that ‘infix’ thing there? That tells the Perl parser that you’re about to define a new kind of syntax. Your choices are: infix, prefix, postfix, circumfix, and postcircumfix. If your use case mixes your new syntax with other syntax and the precedence isn’t correct, you add an “is looser (than some other operator)” or “is tighter (than some other operator)” trait to the subroutine’s definition!

I loved Perl 4. Perl 4 put food on my family for six solid years. Perl 5 seemed like a pretty good idea until Moose and Catalyst made it feel like Haskell without the safety rails. I still like Lisp: even when you tinker with its grammar, at least there’s only one canonical function position, and absolutely no precedence wrangling is needed at all.

There’s an old joke that Perl is a write-only language; six months after writing it, even the original developer has little idea how to understand what he wrote. With the capability to arbitrarily re-define precedence order, syntax positioning, and even Perl’s basic grammar for your precious, delicate snowflake of a use-case, it seems that Perl 6 really intends to make that ancient joke a reality.


JSFiddle as Teaching Tool

Posted by Elf Sternberg as Uncategorized

The last time I taught the high school "web development" class, I struggled with the resources available to me. The school didn’t have space to host student work; the Chromebooks available didn’t come with a text editor (a goddamned text editor! The most basic editor on the system is RTF!); the Chromebooks also had Page Inspector disabled, which has to be the silliest administrative decision ever made. There is literally nothing malicious you can do with the Page Inspector. I can only imagine that it was disabled to prevent kids from learning how the web works.

Late in the quarterly cycle I started to teach a little Javascript, and for that I remembered JSFiddle. Since it has working HTML and CSS panes, this time I started with it.

That made a huge difference. By the end of the very first class most of my students were monkeying around with CSS and HTML, moving things around, changing color, scaling text. I’m going to have to up my game if I’m going to have enough material for these kids even with one week less than last quarter.

It’s still very limited as a debugging tool; without Page Inspector we’re either going to have to create our own debugging windows or spew messages to alert(), which is never any fun at all. That’s old school debugging. But it’s really better than nothing, and it makes teaching programming possible even at the high school level.

Akin’s Laws of Engineering apply strictly to physical systems. Dave Akin was an engineer at NASA who specialized in designing launch vehicles, and his laws apply to building things that go into space. Many of his laws he attributes to other people, but one that is his own is

Any run-of-the-mill engineer can design something which is elegant. A good engineer designs systems to be efficient. A great engineer designs them to be effective.

His example is: an ordinary city has an elegant water system; New York City has an efficient water system; Rome has an effective water system (parts of it date back to Julius Caesar and are still in use!).

I’m quite certain that the designers of the Roman and New York water systems wanted a system that worked and engineered their way into elegance along the way.

The thing that gets me about Akin’s Law of Elegance is that it seems to me to be exactly backward, on the one hand, and to be an example of why Go is popular and Haskell is still struggling on the other. Javascript’s map/reduce/filter are examples of things that are both elegant and effective: they reduce the messy, chronically off-by-one for loop filled with expressions and allocators into something much more readable: an expression, and the things to apply it to. They trade those for efficiency, but software has a lot more ‘give’ than launch vehicles, even in something as small as a watch. If you want all three, then I have to suggest Lisp or Scheme: you can extract the sequencing and allocation out of the passes into a single, transduced pass without extra function calls, saving yourself a lot of memory and CPU.
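Here’s a tiny sketch of that trade-off (the example is mine, not Akin’s):

```javascript
var xs = [1, 2, 3, 4, 5, 6];

// The classic for loop: index bookkeeping, an accumulator, and every
// off-by-one hazard sitting right out in the open.
var total = 0;
for (var i = 0; i < xs.length; i++) {
    if (xs[i] % 2 === 0) {
        total += xs[i] * xs[i];
    }
}

// The same computation as filter/map/reduce: each pass states an
// expression and the things to apply it to, and nothing else.
var total2 = xs.filter(function(x) { return x % 2 === 0; })
               .map(function(x) { return x * x; })
               .reduce(function(sum, x) { return sum + x; }, 0);
```

The chained version allocates two intermediate arrays the loop never needed, which is exactly the elegance-for-efficiency trade described above.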

Given that so few developers care about elegance, I really wonder if we even have systems that are efficient and effective. Go is “effective” in that it trades developer cycles for CPU cycles, but the things written in it have no long-term guarantees of maintainability; Go creates the illusion of this with a hard style guide, but style and elegance are still two different things: you can hire someone to buy you stylish clothes, but if you don’t walk well in them, elegance will be beyond you.

Then again, most developers I know still wear a t-shirt and jeans to work, so I don’t expect elegance to be a trend anytime soon.

Pandastrike has a really good article called Facebook Relay: An Evil And/Or Incompetent Attack On REST, in which the author basically takes Facebook to the woodshed for not understanding REST, trying to break REST, and generally being your classic embrace / extend / extinguish (or étourderie, a beautiful word that has sadly fallen out of the English lexicon) big company imposing its will on everyone else.  As a GraphQL fan, I wanted to like Relay, but every time I played with it my principal reaction was, “Okay, what is this really for?”  Pandastrike goes on to say that it’s good for only one thing, namely social networking data at the massive scale Facebook faces.

But Pandastrike makes one really terrible faux pas of their own in the article.  They make a point of quoting Roy Fielding, but then in the section on REST endpoints and type safety, write:

Although JSON Schema is not part of HTTP proper… if you use content types correctly, and also use JSON Schema with JSCK, you get strong typing over HTTP.

This is true, as far as it goes.  But it misses two incredibly important parts of Roy Fielding’s work, and makes me suspect their intentions.  JSCK, you see, is a product produced by Pandastrike.  And Fielding himself has said that doing REST with JSON is incredibly hard.  So hard, in fact, that the original work on REST held that the transfer of representational state automatically implied hypertext as the engine of application state.  JSON is a terrible tool for hypertext.  You know what’s a great tool?  HTML: Hypertext Markup Language.  It’s not just for the browser and the page itself; it’s for every transaction between the browser and the server, and it carries with it all the metadata needed to make sense of the content.  Even better, unlike JSON, HTML has its own type-checking mechanism that is part of the HTTP/HTML dual standard: its DTD, or Document Type Definition.  You’re not required to use the whole HTML standard, and you can even use XML with a stripped-down version that still comes with a DTD.

Pandastrike goes on about Facebook’s raking attack on a behavior scheme that’s been around for, oh, call it ten years.  But HTML and DTDs have been around for twenty years.

I’ll be fair: Working with HTML and XML on the browser is painful compared to JSON.  It uses more CPU to render and it takes more tooling to program correctly.  Nobody does it that way.  But to ignore it and imply you have a magic solution to an unsolved problem is to be as deceitful as the people you’re criticizing.


This is my very simple secret weapon in doing complicated data transforms on the client side, which I do a lot of when I’m working with Splunk data.

_ = require('underscore');

_.mixin({
    makerail: function(predicate) {
        if (_.isUndefined(predicate)) {
            predicate = _.isUndefined;
        }
        return function() {
            var args = arguments;
            return function() {
                var result = args[0].apply(this, arguments);
                if (predicate(result)) {
                    return result;
                }
                for (var i = 1, l = args.length; i < l; i++) {
                    result = args[i].call(this, result);
                    if (predicate(result)) {
                        return result;
                    }
                }
                return result;
            };
        };
    }
});

_.mixin({rail: _.makerail()});

In its default configuration, rail() calls each argument in the sequence, much like compose(): it passes any arguments given to the resultant composed function to the first function in the chain, then passes the result of that function to each subsequent function. It’s basically like the arrow (threading) macro in Lisp dialects, performing each step in a left-to-right fashion, rather than the right-to-left of Underscore’s compose function. However, it also *terminates* the moment any function produces undefined as a result, shorting out the composition and returning undefined right then.
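Here’s a short usage sketch. So that it runs on its own, it uses a stripped-down stand-in with the same default behavior as the mixin above; the function names are mine:

```javascript
// Stand-in for _.rail() with the default undefined-check predicate,
// so this sketch runs without Underscore loaded.
function rail() {
    var fns = arguments;
    return function() {
        var result = fns[0].apply(this, arguments);
        if (result === undefined) { return result; }
        for (var i = 1; i < fns.length; i++) {
            result = fns[i].call(this, result);
            if (result === undefined) { return result; }
        }
        return result;
    };
}

// parse yields undefined on garbage, which short-circuits the chain.
var parse = function(s) { var n = parseInt(s, 10); return isNaN(n) ? undefined : n; };
var double = function(n) { return n * 2; };

var pipeline = rail(parse, double);
pipeline('21');    // 42
pipeline('bogus'); // undefined; double() never runs
```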

It’s possible to call makerail with an alternative predicate:


var railway = _.makerail(function(result) {
    return (result instanceof Error);
});

In this example, makerail() is describing what F# calls “railway oriented programming”: when any function returns an object of the exception class, all the remaining functions are immediately skipped, without generating an exception and without the performance and flow-control headaches of invoking the exception handler.  It’s actually rather nifty, and encourages a much more holistic approach to dealing with long data flows.
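A standalone sketch of that railway, again with a minimal stand-in for makerail() since Underscore isn’t loaded here, and with step functions of my own invention:

```javascript
// Minimal stand-in for _.makerail(predicate).
function makerail(predicate) {
    return function() {
        var fns = arguments;
        return function() {
            var result = fns[0].apply(this, arguments);
            if (predicate(result)) { return result; }
            for (var i = 1; i < fns.length; i++) {
                result = fns[i].call(this, result);
                if (predicate(result)) { return result; }
            }
            return result;
        };
    };
}

// The failure track: any Error short-circuits the rest of the chain.
var railway = makerail(function(result) { return result instanceof Error; });

var checkPositive = function(n) { return n > 0 ? n : new Error('not positive'); };
var safeSqrt = railway(checkPositive, Math.sqrt);

safeSqrt(16);  // 4
safeSqrt(-16); // an Error object; Math.sqrt never runs and nothing throws
```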

The past two weeks I have volunteered a couple hours of my time at the local high school to teach a small class of the kids in the fine art of HTML and CSS. I have only one hour of interaction with the kids each week, but I spend about two to three hours beforehand prepping materials and getting ready.

The first class was a blitzkrieg of ideas. A bit of “A website is a collection of web pages around an idea” and “A webpage is a chunk of HTML filled in with other stuff to give you one view of the idea” and so on. A map of a fairly complex web production environment: The “business thing”, the business logic, routers, databases, HTML, CSS, Javascript, Canvas, SVG, WebGL, etc. etc. etc. The number of websites I’ve built where “the business” was a completely separate server with a simple frontend written in Django, Catalyst, or Express shows the maturity of the model.

And then we hit the wall. As a demo, I wanted them all to open up a file, edit an eight-line HTML file (HTML, HEAD, TITLE, BODY, a paragraph of content, plus the closing tags), save it, and view it in the browser.
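The file itself was nothing more than this (the title and text were up to the students):

```html
<html>
  <head>
    <title>My First Page</title>
  </head>
  <body>
    <p>Hello, world!</p>
  </body>
</html>
```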

The only tool these kids have for this is a bunch of Chromebooks. Most of them can’t afford laptops. The school provides them the Chromebooks, and their own Google Drive locations and accounts. So that’s what we had to work with.

Problem number 1: These kids have no idea what “plain text” is. Every tool they’ve ever used comes with options to pick a font, do bold, do italics. When I asked them how the computer “knew” to use bold or italics, they shrugged. I had to explain that the zeros and ones saved to their storage contained extra zeros and ones to describe the decoration, the bolding, the italicizing, the font selection. We were going to add the decoration back ourselves, using HTML. But before we could do that, we needed to use the simplest storage format there was, the one with no decoration, the one where every character you saw was the same as the one you saved, with no additions, no annotations, no decorations.

The ease and convenience of RTF and other “printable” or “web-ready” formats has completely ruined these kids’ understanding of what actually happens underneath the covers.

Problem number 2: These kids have no way to correlate files to URLs. The lack of a traditional storage medium, and the introduction of Google Drive, means these kids have no mental map for associating a “web location” with a “filesystem location”. Everything is seemingly ad-hoc and without a real-world physical reference. This is probably the lesser problem, as storage is already very weird and about to get weirder, and we’re all just going to have to live with that fact.

The second class went better, and I went much, much slower. This is a hands-on class where I lead them through a couple of exercises and help them figure out weird things they can do with HTML. We figured out work-arounds for Google Drive and practiced our first, basic HTML, like headers, lists, and so on. And I gave them their first styles. They had fun figuring out random colors that seemed to work for their backgrounded objects.

There’s that Simpsons episode where some adult male says “Am I out of touch? No! It is the children who are wrong!” Well, maybe I am out of touch, but it really seems to me that Chromebooks may be fine for accessing the World Wide Web, but as a tool for developing on the web, they’re more a hindrance than a help.

Seeing as it’s January, that means that we go through many accounting phases about what happened last year. Most of the ones we go through publicly are ones about how we spent our time: did we work out enough, write enough, study enough, love enough. Others we Americans tend to go through with a deep sense of reserve and privacy, mostly about money.

Last year I made what is, to me, an insane amount of money. Far more than I ever thought I’d be making in any given year. And it’s more than the year before that; in fact, it’s been steadily going up every year since the 2008 recession. Even after adjusting for inflation, I’m still making more per year than my father did, which I have to say is utterly mind-boggling, since he was a radiologist, a pioneer in nuclear medicine, and a real estate mogul all at the same time.

Yet, I’ve never felt the connection between work and reward feel more tenuous.

I’m currently in a large infrastructure position where, nominally, I was hired for my skills as a software developer, yet I now joke that I write code during the commute because they don’t let me write it at work. Instead, I manage configuration files. I worked on fleshing out a platforming initiative for a massive chunk of network monitoring software; that platform is now mature enough that the skills I initially brought to the table are no longer needed. The real skills I spent twenty years acquiring are now being allowed to decay while I fiddle around the edges of an impressively large but intellectually dull enterprise software product.

On the other hand, because it is a network monitoring tool that helps prevent enterprise-scale service failure, financial loss, and outright fraud, there is an unbelievable amount of money sloshing around the sector, and my company has seen fit to reward me repeatedly with bonuses, raises, and stock options.

And yet, I know I don’t work nearly as hard as the average apple picker in the agricultural regions just east of where I live. I am not as ambitious or as much of a go-getter as many of my co-workers; I’m consciously on a daddy track and I’m not going to sacrifice my family’s time to my employer. I do my job, hopefully well, and go home at the end of the work day. The maturity and prosaic nature of the project, I confess, leaves me cold with desire to push the state of the art. (This is the flipside of my time at IndieFlix or Spiral Genetics, where I worked like a dog and put in evenings because the project was flippin’ cool.)

I really don’t have ambition to “maximize shareholder value” except to the point that I’m currently a shareholder myself. I have an ambition to make the world a better place. Every job I’ve had of the first type paid excessively well; every job of the last type was inspiring and made me feel good about myself.

When I read about that weird Silicon Valley meme that “we work hard, so our rewards are commensurate with what we do,” I have to shake my head and wonder: really? Canada’s Micronutrient Initiative costs about $200 million, and has prevented almost 400 million cases of life-threatening birth defects in India, Canada, and North Africa; Candy Crush is worth $7.1 billion, and I doubt its developers actually work as hard as the people hauling sacks of iodine crystals through the third world’s back roads.

The disconnect between effort and reward has never been as stark or as absurd as it is today. My experience is a microcosm of that disconnect. I’m happy to do my job, and happy to get paid to do it, but I can’t help but feel that there’s something very off about the relationship between the two.


Interview Question

Posted by Elf Sternberg as Uncategorized

I won’t reveal where or when I got this question, but it always amused me.  At the time, I answered it using Underscore and Coffeescript, which the interviewers agreed I would have access to… but here’s a pure ES6 solution.

The problem, simply stated, was “write a function that sums two polynomial equations and prints the results.”  They defined the format for the input this way:

// 5x^4 + 4x^2 + 7 
// 3x^2 + 9x - 7
var e1 = [{x: 4, c: 5}, {x: 2, c: 4}, {x: 0, c: 7}];
var e2 = [{x: 2, c: 3}, {x: 1, c: 9}, {x: 0, c: -7}];

They were kind enough to let me code on my keyboard.  My answer is rather dramatic.

// Reduce any list of equations into an array of maps of exponent:coefficient
var eqns = [e1, e2].map((a) => a.reduce((m, t) => { m[t.x] = t.c; return m; }, new Object(null)));

// Find the largest exponent among all the equations
var maxe = Math.max.apply(null, eqns.map((a) => Math.max.apply(null, Object.keys(a))));

// For the range (maxe ... 0), for all equations, sum all the coefficients of that exponent, 
// filter out the zeros, sort highest to lowest, create string representations, and print.
console.log(
    Array.from(new Array(maxe + 1), (x, i) => i)
        .map((exp) => [exp, eqns.reduce(((memo, eqn) => memo + (eqn.hasOwnProperty(exp) ? eqn[exp] : 0)), 0)])
        .filter((e) => e[1] != 0)
        .sort((a, b) => b[0] - a[0])
        .map((e) => e[1] + (e[0] > 1 ? 'x^' + e[0] : (e[0] == 1 ? 'x' : '')))
        .join(' + '));

The interviewer just stared at it, and stared at it, and said, “I’ve never seen anyone solve that in three lines.  Or that fast.”

I shrugged.  “It’s a straightforward map/reduce of the relationship between exponents and coefficients, removing any factors that had a coefficient of zero.  This seemed the least buggy way to do it.  The riskiest part of this equation is the mapping back to string representation.  The nice feature of this function is that if we generalize the first line over an arguments array, it works for any number of equations, not just two.”

He agreed.  They ultimately didn’t hire me.  I had a friend there, and he said, “They really liked you, but it was pretty clear you were already bored where you were and moving from one infrastructure job to another wasn’t going to change that.”  Sad but true.
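For what it’s worth, the generalization I described to him, lifting that first line over an arguments array, looks something like this (the function name is mine):

```javascript
// Sum any number of polynomials in the format from the problem statement.
function sumPolynomials() {
    // Reduce each equation into a map of exponent:coefficient.
    var eqns = Array.from(arguments).map((a) =>
        a.reduce((m, t) => { m[t.x] = t.c; return m; }, {}));
    // Find the largest exponent among all the equations.
    var maxe = Math.max.apply(null, eqns.map((a) => Math.max.apply(null, Object.keys(a))));
    // Sum the coefficients per exponent, drop zeros, sort high-to-low, stringify.
    return Array.from(new Array(maxe + 1), (x, i) => i)
        .map((exp) => [exp, eqns.reduce((memo, eqn) => memo + (eqn.hasOwnProperty(exp) ? eqn[exp] : 0), 0)])
        .filter((e) => e[1] != 0)
        .sort((a, b) => b[0] - a[0])
        .map((e) => e[1] + (e[0] > 1 ? 'x^' + e[0] : (e[0] == 1 ? 'x' : '')))
        .join(' + ');
}

var e1 = [{x: 4, c: 5}, {x: 2, c: 4}, {x: 0, c: 7}];
var e2 = [{x: 2, c: 3}, {x: 1, c: 9}, {x: 0, c: -7}];
sumPolynomials(e1, e2); // "5x^4 + 7x^2 + 9x"
```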



Lisp In Small Pieces, Chapter 5: The Storage Story

Posted by Elf Sternberg as Lisp

One other thing about Lisp in Small Pieces chapter 5 jumps out at me: the storage story.

In the interpreter written for Chapter 5, some things are cons lists (most notably, the expression object you pass into the interpreter), and some things are lists, but they’re not built with car/cdr/cons.

In chapter 3, we built an interpreter that used full-blown objects, in which each object had a field named “other” that pointed to the next object; when looking up a variable or an unwind point, the search was an explicit call: starting with the latest object, a search would begin down the chain for a match and, when found, would trigger either a memory retrieval or a continuation, at which point the interpreter would resume with the relevant memory or continuation. Each object had a “failure” root class that would throw an exception.

In chapter 5, it gets even more functional. Chapter 5 tries to define everything in the Lambda Calculus, which allows for closures, but doesn’t by default support objects. But Queinnec really wanted to teach about allocation issues, especially the boxing and unboxing of values, so to make that point, he created two structures: one represents variable names that point to indexes, and one represents an indexed collection of boxes. Lookup represents the Greek equation σ (ρ ν), which is basically that the environment knows the names of things, and the store knows the location of things.

But in order to be explicitly literal, Queinnec goes full-on. Both environment and store are represented the same way. He creates a base environment that looks like this:

ρ.init = (ν) -> "Variable name not found."

and then when we add a new variable name to the stack, we write:

(ρ, ν, σ, κ) -> (ν2) -> if (ν2 == ν) then κ(σ) else ρ(ν2)

In this case, we call a function that creates a function that, in turn, says “If the name requested matches the name at creation time, return the stored store point (actually, continue with it), else call the next (deeper) environment, all the way down the stack until you find the thing or hit ρ.init”.
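The same trick translates directly into JavaScript, if that makes it easier to see. This sketch is mine, and it drops the continuation argument: an environment is just a function from names to locations.

```javascript
// The empty environment: looking anything up is an error.
var initialEnv = function(name) {
    throw new Error('Variable name not found: ' + name);
};

// Extending the environment returns a new function that shadows
// older bindings, just like the CoffeeScript-style snippet above.
function extend(env, name, location) {
    return function(name2) {
        return (name2 === name) ? location : env(name2);
    };
}

var env = extend(extend(initialEnv, 'x', 0), 'y', 1);
env('y'); // 1, found in the outermost frame
env('x'); // 0, found by delegating down the chain
```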

It’s a really cheesy way of emphasizing that you can do Lisp in a full-on Lambda Calculus way, but you probably shouldn’t. It’s also completely dependent upon the parent environment to reap memory when you’ve examined the tip of an expression and have retreated back toward the base of the expression tree to proceed down the next expression.

The lessons here are about the Lambda Calculus, and about memory management; in the latter case, about how hard it’s going to be if you want to do it the way the big boys do.  Garbage collection is hard.

