24Aug

Pattern Matching And Lisp

Posted by Elf Sternberg as programming

No, I don’t mean regular expressions.  I mean that thing you see in “modern” programming languages where you’ll have two functions of the same name that take different arguments and do different things.  We use it all the time.  The plus symbol, “+”, often has two separate meanings, and is often processed by two different operations:

add(number, number) -> mathematical addition
add(string, string) -> textual concatenation

Now, most programming languages provide this differentiation by convention.  C++ is fairly unusual in that it permits it explicitly; you can use the same name for a function with different inputs (the arguments), and the C++ compiler will connect any calls to that function name correctly depending upon the arguments you provide at compile time.

But modern languages like Haskell do this explicitly, without any special ceremony.  You can re-use function names all day, and as long as every function has unique input and output types, Haskell will associate the right functions in the right way.  (I think Haskell is annoying in that it doesn’t even matter what order you make these declarations in, but some people seem to like it.)
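Plain Javascript (the language I’m using for these exercises) has no such dispatch built in, so you end up simulating it by hand.  A minimal sketch of the add example above, with the dispatch written out explicitly (the code is mine, not from any particular library):

    // Hand-rolled "overloading": dispatch on the runtime types of the arguments.
    // Haskell or C++ would pick the right body for us; here we check by hand.
    function add(a, b) {
      if (typeof a === "number" && typeof b === "number") {
        return a + b;                 // mathematical addition
      }
      if (typeof a === "string" && typeof b === "string") {
        return a + b;                 // textual concatenation
      }
      throw new TypeError("add: no matching clause for (" + typeof a + ", " + typeof b + ")");
    }

    console.log(add(2, 3));           // 5
    console.log(add("foo", "bar"));   // "foobar"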

It turns out this behavior is, for programming, really old.  Like, 1960’s-era old.  The λ-calculus that McCarthy turned into Lisp back in 1960 described it.  McCarthy didn’t know how to turn pattern matching into a general-purpose feature, so we ended up with “if” statements and conditionals and all the rest to decide these things by hand, but the original math seemed to require automatic pattern matching.  Take the “sequence” operation: This is when a function has a list of things to do, several statements in a row, and returns the value of the last expression.  The math for this reads:

Ε+[[π]]ρκσ = Ε[[π]]ρκσ

Ε+[[π π+]]ρκσ = (Ε[[π]]ρ λεσ1.(Ε+[[π+]]ρκσ1)) σ

The Ε[[…]] notation denotes “the meaning of…”  In this case, line one reads “The meaning of a sequence containing a single statement is just the meaning of that statement, after we’ve taken into account the heap and the stack.”  And line two reads “The meaning of a statement with more statements following is the meaning of the first statement, which we then discard to derive the meaning of the rest of the statements, all the while taking into account the heap and the stack.”  There’s actually more to that, as the environment and memory can change from statement to statement, but that’s already handled by our understanding of what ρ and σ mean.

In a common language like Python or Javascript, we would have an ‘if’ statement to say “If this is a single statement, we’re done and return; else process the first and iterate.”  But in Haskell you could just write those two equations as two separate clauses, and it would work; the compiler knows exactly what you mean, and does the right thing, because there’s no ambiguity there at all.
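To make that concrete, here’s a minimal Javascript sketch of the sequence rule above, with the explicit ‘if’ that the math gets to omit.  The evaluate function here is a toy stand-in for the interpreter’s real single-expression evaluator:

    // A toy single-expression evaluator: every expression evaluates to itself.
    // The real interpreter dispatches on the shape of the AST node.
    function evaluate(expr, env, cont, store) {
      return cont(expr, store);
    }

    // The sequence rule, in continuation-passing style.
    function evaluateSequence(exprs, env, cont, store) {
      if (exprs.length === 1) {
        // Line one: the meaning of a one-statement sequence is the meaning of that statement.
        return evaluate(exprs[0], env, cont, store);
      }
      // Line two: evaluate the first statement, discard its value, and carry
      // the (possibly updated) store into the rest of the sequence.
      return evaluate(exprs[0], env, function (value, store1) {
        return evaluateSequence(exprs.slice(1), env, cont, store1);
      }, store);
    }

    evaluateSequence([1, 2, 3], {}, function (value) { console.log(value); }, {}); // 3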

I just wish I could write them in English, which is my native tongue, and not have to translate from Greek in my head.

Chapter 4 of Lisp In Small Pieces introduces side effects. It also introduces an (excruciatingly simplified) taste of what managing memory feels like, with a monotonically increasing integer representing memory we’ve requested, and a corresponding identity in the environment.

More interesting is the abandonment of the Object-Oriented approach in Chapter 3 for a completely functional approach in Chapter 4. Everything is a function with enclosure. When you want a number, you have to send the message (number sValue) to extract it. You can get typing information with (object sType). The “sValue” and “sNumber” are, in my code, simple Javascript objects of type Symbol, around which I’m trying to formalize symbols and quotations, so that going forward it’s easy to distinguish a Symbol as a unique type– doing so may remove an entire class of difficulty in the read() function.
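A rough sketch of what that message-passing style looks like in Javascript (the symbol names echo the chapter’s conventions, but the code is my own simplification):

    // A value as "a function with enclosure": the only way to get at the
    // number inside is to send the closure a message.
    const sValue = Symbol("value");
    const sType  = Symbol("type");

    function makeNumber(n) {
      return function (message) {
        if (message === sValue) { return n; }
        if (message === sType)  { return "number"; }
        throw new Error("number: unrecognized message");
      };
    }

    const three = makeNumber(3);
    console.log(three(sValue)); // 3
    console.log(three(sType));  // "number"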

Chapter 4 was supposed to also teach about memory management and variable “boxing,” but in fact it only contains a very cheap pair of stacks: one for the environment, the key of which is a name and the value of which is a number, and one for memory, the key of which is the number, and the value of which is the object: a symbol, a number, a boolean function or an arbitrary function. It makes a strong correspondence between this Lisp and the λ-calculus, and even shows how, with a few small extensions, the λ-calculus can accommodate assignments and similar side effects, although not necessarily externalities, like I/O. Extending this to a real memory management system is, I hope, not entirely left up to the reader as an exercise.
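The whole arrangement fits in a few lines.  Here’s a sketch of the idea (the names are mine; the book’s version is, of course, in Scheme): the environment maps names to addresses, memory maps addresses to values, and a monotonically increasing counter stands in for allocation.

    let nextAddress = 0;

    // Bind a name to a freshly "allocated" address.
    function extend(env, name) {
      const address = nextAddress++;
      return { ...env, [name]: address };
    }

    // An assignment is modeled as producing a new store.
    function update(memory, address, value) {
      return { ...memory, [address]: value };
    }

    // Dereference a name: name -> address -> value.
    function lookup(env, memory, name) {
      return memory[env[name]];
    }

    // (set! x 42) in miniature:
    let env = extend({}, "x");
    let memory = update({}, env["x"], 42);
    console.log(lookup(env, memory, "x")); // 42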

I’m still trying to figure out how to write this monadically, such that I can “lift” (see: The Elevated World) all of my functions out of the “standard Lisp lists and things” world into the “Lisp lists and things encapsulated with metadata about compilation” world, which would enable me to add debugging, source code, and sourcemap information to the resultant product. Maybe I need to do a few more monoid/monad tutorials, the more hands-on ones, to wrap my head around lifting.
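The shape I’m groping toward looks roughly like this (very much a sketch, and the names are made up): values get paired with their metadata, and ordinary functions get “lifted” so they operate on the paired form without ever knowing the metadata is there.

    // A value paired with compilation metadata (source position, etc.).
    function wrap(value, meta) {
      return { value: value, meta: meta };
    }

    // "Lift" an ordinary one-argument function so it works on wrapped values,
    // passing the metadata through untouched.
    function lift(fn) {
      return function (wrapped) {
        return wrap(fn(wrapped.value), wrapped.meta);
      };
    }

    const upcase = (s) => s.toUpperCase();
    const liftedUpcase = lift(upcase);

    console.log(liftedUpcase(wrap("car", { line: 3, column: 7 })));
    // { value: 'CAR', meta: { line: 3, column: 7 } }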

The source code for Chapter 4 of Lisp In Small Pieces, Coffeescript Edition is, as always, available on Github.

Chapter 5 basically reiterates Chapter 4, only it does so entirely in Greek. I’m only half-kidding; it’s mostly an attempt to formally describe the interpreter covered in Chapter 4 using the formal language of an extended λ-calculus. It doesn’t have an interpreter of its own, except in a purely theoretical, head-in-the-clouds description of a λ-calculus-derived language written mostly in Greek symbols.  So there’s not going to be any source code associated with it.

In Chapter 3 of Lisp In Small Pieces, you are encouraged to write a lisp interpreter using continuations. Here’s what I learned as I implemented the chapter 3 examples in Javascript:

A continuation is a representation of the incomplete state of the program. For any evaluation being done, the continuation at that moment holds everything else that must be accomplished in order for the program to finish.

It sounds odd, but it’s actually an easily understandable feature of any development environment with first-class functions and working closures– both of which Javascript has. Imagine the instruction print 4 * 2. At the very top, that breaks into “Evaluate four times two, then print the result.” The part after ‘then’ is the final continuation.

So it breaks down to:

  • Wrap the “print something” as a FinalContinuation. Pass that continuation to the evaluate function with a pointer to the rest of the AST (Abstract Syntax Tree) to evaluate.
  • Evaluate what the ‘*’ means. This retrieves a native function that expects two numbers and returns a product. Wrap the FinalContinuation with a new EvFunc continuation.
  • Evaluate what ‘4’ means. Continue on to…
  • Evaluate what ‘2’ means. Continue on to…
  • Apply the function. This continuation has been built by the previous three. Inside the source code, you’ll see EvFunc, which calls EvArgs and passes in Apply, a continuation to run after the arguments have been evaluated. Each instance of “EvArgs” creates a “GatherArgs”, which appends the discovered arg onto a list, and then passes the ‘Apply’ continuation forward. When EvArgs discovers no more arguments, it finally calls the ‘Apply’ continuation, which performs the primitive operation ‘4 * 2’, and then Apply calls its continuation, which is the FinalContinuation, which prints the value passed by Apply.
  • As that was the FinalContinuation, the program terminates.

Rather than stepping through the program line by line, we step through continuation by continuation. Each line of a program becomes its own continuation, encapsulating the current state of the system, as well as all possible future states– at least abstractly. For a while, this was really hard for me to understand, but now that I’ve done the exercise, I see that the notion “the rest of the program” doesn’t mean the rest of the syntax tree all laid out and ready to trigger, but instead means “What we’ve done before” (the current continuation), “What we want to achieve and what we haven’t processed yet that needs doing to achieve it” (the continuation we’re about to build), and “How we bridge the two after processing this node of the AST” (the current evaluation).
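Boiled down to its bones, the continuation chain for print 4 * 2 looks something like the following sketch. It’s far simpler than the book’s interpreter (no environment, none of the EvArgs/GatherArgs machinery), but the shape is the same: every step does its one job and then hands the result to the next continuation.

    function evaluate(expr, cont) {
      if (typeof expr === "number") {
        return cont(expr);                        // literals evaluate to themselves
      }
      const [op, left, right] = expr;             // e.g. ["*", 4, 2]
      return evaluate(left, function (l) {        // evaluate the first argument...
        return evaluate(right, function (r) {     // ...then the second...
          return applyOp(op, l, r, cont);         // ...then apply the primitive.
        });
      });
    }

    function applyOp(op, l, r, cont) {
      if (op === "*") { return cont(l * r); }
      throw new Error("unknown operator: " + op);
    }

    // The FinalContinuation: print whatever value reaches it, and we're done.
    evaluate(["*", 4, 2], function finalContinuation(value) {
      console.log(value);                         // 8
    });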

Each continuation becomes a handler of its own momentary responsibility in the interpretation process and a wrapper around all the future responsibilities that have thus far been discovered during interpretation.

That seemed like a lot of work, and I wasn’t entirely sure of the benefits of it. On the other hand, continuations do allow for some features of programming that are much harder without them: blocks of code with early return, catch/throw, and finally.

Blocks with early return were easy. When you enter a block, the interpreter creates a new continuation: what to do after the block. Then it creates another continuation: what to do inside the block. It then resumes the inside one. If the block ends, or if return is used, the results are delivered as the value to be processed by the after continuation. Blocks with labels look up the stack where the prior continuations were stored, and when a match is found, the program resumes using that continuation.

This is important: the stack’s purpose, which up to this point was strictly to maintain the identity and value of variables in a given scope, has a new responsibility: maintaining the integrity of the interpretation process itself. There is a new thing on the stack that is not a Symbol/Value pair! Or rather, it is a Symbol/Value pair, but not one you can dereference and manipulate on your own. Eventually, we learn that the classic call/cc (call with current continuation) does allow you to access it and run it whenever you want.
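As a sketch of the block-with-early-return idea in plain Javascript: the real CPS interpreter keeps the “what to do after the block” continuation on the stack, but the usual Javascript workaround is to fake the escape with a sentinel exception, which is what this toy does.

    function block(body) {
      const sentinel = {};                  // unique to this block
      function returnFrom(value) {          // the "what to do after the block" continuation
        throw { sentinel: sentinel, value: value };
      }
      try {
        return body(returnFrom);            // "what to do inside the block"
      } catch (e) {
        if (e && e.sentinel === sentinel) { return e.value; }
        throw e;                            // not ours; keep unwinding
      }
    }

    const result = block(function (ret) {
      ret(42);                              // early return
      return "never reached";
    });
    console.log(result); // 42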

Catch/throw is more complicated. Throw statements can appear anywhere in the source code and abstract syntax tree (AST), and aren’t necessarily containerized within a catch block. When they trigger, a corresponding catch continuation has to be looked up and triggered. A catchLookup walks up the collection of continuations (not the environment, note) until it gets a match or the program dies with an uncaught throw.

A protect is even more important. In Javascript and Python, protect is usually called ‘finally’, and it means a collection of statements that must be executed no matter how the current continuation-managed context exits. So when the body of the protected statement ends in any way, we look up the collection of current continuations (remember, they’re wrapping each other up to the BottomCont, creating a sort of stack-that’s-not-a-stack) until we find the continuation in which the protect was created, then evaluate the final body, then continue on with execution. Because any continuation can be wrapped this way, the base continuation has to be modified, and both the ‘block’ and ‘catch’ handlers also have to be modified to scan for protected evaluations.

This is fun. And kinda deep. It goes way more into detail than your usual “let’s write an interpreter!” tutorial that you find on the Internet, even if one of my side-excursions was to write the λisperator interpreter.

Even this code is twonky. Symbols are not handled well. There’s an arbitrary distinction between initial symbols and created symbols. It’s not as bad as the λisperator interpreter where the continuation designations were kinda arbitrary.

The oddest thing about this code, and it’s endemic in all of these “write an interpreter in Scheme” textbooks, is that it rides hard on top of the Scheme garbage collector. When the environment stack pops down, or when a continuation is executed and unwraps, the wrapper is simply discarded, and the assumption is that garbage collection just works at the interpreter’s interpreter level. You start to notice it a lot as you go through the exercise.

We all have superpowers. One of my friends once said that my superpower was the ability to look at any piece of software and, within minutes, be able to describe with frightening accuracy the underlying data structures and fundamental internal APIs used to manipulate it. The other day I ran headlong into Airtable, and the underlying data structure was pretty obvious. The performance is amazing, but the underlying data is, well, simple.

A spreadsheet is a Cartesian grid of cells. But the interaction between the cells is a graph, in which some cells contain values, and other cells contain expressions that are dependent upon those values. A hypersheet extends this notion to include the idea that some cells contain, or are driven by, data stored elsewhere (and usually stored using an RDF triple).

So: An Airtable is nothing more than a graph database, backed by Postgres, in which triggers written in a convenient subset of a popular language (let’s call it Javascript) can be written by the community to facilitate the filtering and processing of hypertext content on a cell-by-cell basis.
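To be clear about what I mean by “graph, not grid,” here’s a toy of the data structure (a sketch of the idea, not anyone’s real implementation): value cells hold data, formula cells hold a function plus the names of the cells it reads, and evaluation walks the dependency graph.

    const sheet = new Map();

    function setValue(name, value) {
      sheet.set(name, { value: value, deps: [] });
    }

    function setFormula(name, deps, fn) {
      sheet.set(name, { fn: fn, deps: deps });
    }

    // Resolve a cell by walking its dependencies: the graph, not the grid.
    function get(name) {
      const cell = sheet.get(name);
      if (cell.fn === undefined) { return cell.value; }
      return cell.fn(...cell.deps.map(get));
    }

    setValue("A1", 4);
    setValue("A2", 2);
    setFormula("A3", ["A1", "A2"], (a, b) => a * b);
    console.log(get("A3")); // 8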

The trick of a spreadsheet is in extending it conveniently. But there’s already a book that teaches you how to do cutting, copying, and pasting of content. It’s not a fantastic book; it’s too centered on its own interests to abstract spreadsheet notions well, and it’s too concerned with C# (although the F# asides are fascinating). It does cover the issue of relative-and-absolute ranges and how to cut and paste them without losing track of what they refer to, so that’s good enough.

Seriously: That’s about it. I’ve linked to every paper and book you need, every idea, to re-implement Airtable. All it takes is some engineering. And let’s be honest here: if you get the infrastructure for Airtable right, you get how to write Wunderlist, Trello, and Evernote for free.

Now, be fair: I have no idea if Airtable is implemented this way.  For all I know, this facile analysis might give the engineers there one heck of a giggle.  But if I were going to implement something like Airtable, these are the resources with which I would begin: a document store of databases, a graph database of relationships, an abstracted RDBMS (probably Postgres) of content, a decent event manager using push as its paradigm for updating cells (RxJS, using React, or possibly just straight up Om), and a lot of interesting knowledge gleaned from papers on hypersheets and functional spreadsheets, with each cell decorated with an about field that identifies the uniform resource name of the object that cell represents.  Processing could be done client-side or server-side, depending upon the nature of the cell and the frequency with which it needs updating (RSS and Semantic Web objects would be updated less frequently), and it would be technically challenging but not impossible to discern which was which with a proper constraints-based system.

Oh, by the way, if you get the typeof/about/property triple correct for cell representation, you’re basically a few engineering puzzles away from replicating The Grid, as well.

I’m a dilettante in a lot of things software-development related, but I do love watching the world go by. Every once in a while I see two things happen at once and wonder why they aren’t aware of each other.

A few days ago I was reading a paper entitled: Buzz: An Extensible Programming Language for Self-Organizing Heterogeneous Robot Swarms. I don’t know anything about robot programming, but I have read (and implemented, at least in simulation) Tim Skelly’s description of “flocking” behavior for the 1980 video game “Rip-Off.” Buzz was a pretty good paper. It starts out with a fantastic idea: when a robot goes from being independent to being part of a swarm, it needs new relationship management features, which the Buzz VM provides through a combination of live tracking of swarm membership, neighborhood awareness, and “virtual stigmergy,” a way of emulating a buildup of sensory information until it crosses thresholds and causes new behavior in the entire swarm, emulating the way ant or bee hives collectively operate.

(Aside: The first video game to use stigmergy is PacMan: as PacMan moved, he left a “scent marker” on the square behind him that faded with every clock cycle until it reached zero; the probability that a ghost would turn was influenced by that scent trail. PacMan had easy-to-memorize patterns because the clock cycles were relatively large with respect to human reaction time and the probability had no randomization.)
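(The mechanism itself is tiny. A toy illustration of the decaying scent-marker idea, not actual PacMan internals: deposit a marker, fade it a little every tick, and let the pursuers weight their decisions by whatever scent remains.)

    const trail = new Map();                 // cell -> scent strength

    function markCell(cell) {
      trail.set(cell, 100);                  // deposit a fresh marker
    }

    function tick() {                        // every clock cycle the scent fades
      for (const [cell, scent] of trail) {
        if (scent <= 1) { trail.delete(cell); }
        else { trail.set(cell, scent - 1); }
      }
    }

    function scentAt(cell) {
      return trail.get(cell) || 0;           // pursuers weight their turns by this
    }

    markCell("3,4");
    tick(); tick();
    console.log(scentAt("3,4")); // 98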

But one thing that bugged me about Buzz was that the developers wrote their own language. It looks a little like Javascript, and a little like Perl, and a little like every other Algol descendant. The authors of Buzz even admit to hewing to “object-oriented programming jargon.”

And then a few days later I saw the paper that explained why I was so bothered: Draining the Swamp: Micro Virtual Machines as Solid Foundation for Language Development. Swamp proposes that VM writers do too much with their language. A VM, they write, should deal with exactly three issues: memory, hardware, and concurrency. Buzz does that– and then it does so much more. Swamp says that everything else should be left to the language developers. Once you’ve got concurrency, memory (really, garbage collection), and hardware abstraction down in the VM, your VM should get out of the way.

If the writers of Buzz had gone down this route, they could have spent all their time making sure to get the concurrency issues around swarm membership and virtual stigmergy absolutely correct and performant, and allowed the language development guys on their team to layer any language they wanted on top: Lua, Javascript, Lisp. Even their own Buzz.

So, Buzz, Mu, I must highly recommend you to each other.

Bring on the robot apocalypse.

TL;DR: Lisp In Small Pieces will get you up to speed much faster than Essentials of Programming Languages, but you’ll benefit from owning both.

As I’ve blogged before (and have much to discuss), I’ve been working my way through Christian Queinnec’s Lisp In Small Pieces, doing the exercises mostly in Coffeescript, just to prove that it can be done in something other than Scheme, and to demonstrate that, for the most part, Javascript is a decent Scheme underneath its hideous Java-“inspired” syntax. I’m just now finishing up the text of chapter four. I must say that I never read a textbook this closely when I was a (presumably, therefore, terrible) university student. I’ve read and re-read each chapter to make sure I got what was being said before moving on.

Out of curiosity last night I also picked up my copy of Daniel Friedman’s Essentials of Programming Languages, which has been languishing on my shelf for about a year after I scored it at a used bookstore for cheap. It’s the second edition. I read through chapter two.

LiSP gets you into the nuts and bolts of interpretation and scope exceptionally quickly, whereas Essentials really wants you to understand the math and theory first. The result is that Essentials teaches you two different languages: the first is the language you’re going to create, and the second is the mathematical language you’re going to use to describe what you’ve created. LiSP doesn’t care much about the latter.

Queinnec will give you the vocabulary, but you won’t need it so much as you will need the code and your understanding of it. Eventually, you’ll come to appreciate the way you keep and maintain an environment, how you manage the stack and how you reference count within it, how side-effects are incorporated into the language and why they matter.

Oddly, by the time you’re 120 pages into both (about 1/4 of the way in), you’ve mostly got everything you need, but Queinnec makes you feel as if you’re ready to write useful DSLs and interpreters. Friedman is still hiding his map in his coat pocket, and it’s not clear where you’re going from the text.

As I’ve pointed out, I’m hacking Queinnec’s code in Coffeescript, and Queinnec’s writing and examples make that feel like a viable option. Friedman seems excessively married to Scheme; not only are his examples in Scheme, but he emphasizes how perfect Scheme is for the research project his book represents and feels dismissive of any other approach.

Essentials is still a good book to have. Read Chapter 6 after reading LiSP Chapter 3: it’s all about continuation passing, and by chapter 3 of LiSP you’ve already written three CPS-based interpreters. Chapter 7 of Essentials covers type systems, which LiSP never really delves into with any depth. You might want to read Essentials Chapter 7 after you’ve finished enough of LiSP to have a working Lisp, one that you can verify and validate by working your way through Friedman’s other excellent series, The Little Schemer, The Seasoned Schemer, and The Reasoned Schemer. Whether or not you want to go all-out and get the last book, The Little Prover, is a matter of taste.

“If you can’t explain it simply, you don’t understand it well enough.” – Albert Einstein

This quote annoys me. I’ve spent the last year trying to understand functional programming and this deep, mathematical concept called the monad. It turns out that monads are simple to explain: if you’re mathematically inclined and know your category theory, they’re a monoid in the category of endofunctors. If you’re a programmer, they’re a pattern for the augmentation of function composition.

Since I’m a programmer, the latter explanation appeals to me. (I’m also a mathematician, and I know what all the words mean in the theoretical explanation, but I don’t understand them well enough to use them effectively.) For your basic Python, Ruby, or Javascript developer (nevermind, shudder, PHP), you have to start by explaining what function composition is, how you already do it without having a word or even a concept for it, why you want to do it more, how you work around problems involving collections and sequences, and then you have to explain what you mean by “augmentation,” and what it buys you, and how it fits into your existing mental framework of designing and writing software. I’ve been working my way through a series of exercises that have helped me understand what “augmented function composition” is and does for me.
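For what it’s worth, here’s the smallest Javascript sketch I know of for what “augmentation of function composition” means. Plain composition just wires functions together; the “augmented” version weaves one extra rule in between every step. The rule here is the Maybe pattern (“stop if anything came back null or undefined”); other monads weave in different rules.

    // Plain composition: feed the output of one function into the next.
    const compose = (f, g) => (x) => g(f(x));

    // Augmented composition: the same wiring, plus a rule between the steps.
    const composeMaybe = (f, g) => (x) => {
      const intermediate = f(x);
      return intermediate == null ? intermediate : g(intermediate);
    };

    const head = (xs) => xs[0];                  // may return undefined
    const length = (s) => s.length;

    const headLength = composeMaybe(head, length);
    console.log(compose(head, length)(["foo", "bar"]));  // 3
    console.log(headLength(["foo", "bar"]));             // 3
    console.log(headLength([]));                         // undefined, instead of a TypeError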

But it took work. It took a lot of work. Even with twenty years of software engineering experience, it took hours of hammering away at the problem, day in and day out, for me to even start to see the benefits of using this pattern. I’ve now got a fairly solid understanding; I’m hoping to apply it to some personal projects in the hopes of reducing my error rate and speeding up delivery.

Lots of people have tried to “simply” explain monads to me. But to understand them, I had to put in the effort. I had to put in the hours. I had to do the work. Sometimes, no matter how well you understand “it”, no matter how simply you explain “it”, your audience simply doesn’t have the knowledge, the mental framework, necessary to understand what you’re talking about.

And some people, no matter how hard they try, will never have that framework. They don’t have a mind capable of grasping it. We would never ask Sarah Palin to try and understand multi-dimensional vector maths; she hasn’t got the mind or temperament for it. It’s not even that she won’t, it’s that she can’t. By nature or nurture, she hasn’t got the headaround for it, and never will.

It’s not about understanding or simplicity; it’s about your audience’s capability and willingness to comprehend.

23Jul

The Metadata Problem in Primitive Lisp

Posted by Elf Sternberg as Lisp

Yep, I’ve definitely gone down the wrong path with my current research project, which means I’ve probably wasted at least a day or two on it, which I find deeply frustrating.

As I’ve been working my way through Lisp In Small Pieces, I’ve been trying to push the boundaries of what the interpreter does in order to make it seem more, well, “modern” to me. The target language is Javascript, both in the interpreter and in the transpiler; I ultimately want to end up with something similar to Gazelle or Pixie, but with even more facility. The laundry list is very long.

The problem I have right now is Sourcemaps. Sourcemaps are a tool, supported by all major browsers and Node/io.js, that translates the location of an error in the running code into a near-equivalent location in the original source files. They were meant to support minifiers like Uglify or Google Closure, but they turn out to work just fine for transpilers.

The problem is three-fold: (1) the original Lisp syntax doesn’t support carrying metadata along with the source as it is processed, (2) Lisp macros mangle the source in unhealthy ways, and (3) including the metadata necessary to support Sourcemaps makes the reader unsuitable for use as a general-purpose list-processing (i.e. “LISP”) utility for users.

I thought I had a solution using Javascript’s non-primitives for String and Number; that would have let me carry the values forward successfully. Unfortunately, the Javascript non-primitives are not automatically self-evaluating; (new String("foo")) != (new String("foo")), so comparisons and utility at run-time are severely impacted. Expectations are not being met. I know why this is the case, I’m just not happy with it.
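Concretely, the problem looks like this (a demonstration, not a fix): the moment the reader hands back String objects instead of primitives, every identity comparison downstream has to remember to unwrap.

    // Primitive strings compare by value; String objects compare by identity.
    console.log("foo" === "foo");                         // true
    console.log(new String("foo") == new String("foo"));  // false: two distinct objects

    // A reader that returns String objects (so it can hang line/column data
    // on them) breaks strict comparisons unless everyone remembers to unwrap.
    const a = Object.assign(new String("foo"), { line: 1, column: 4 });
    console.log(a == "foo");            // true, via coercion
    console.log(a === "foo");           // false
    console.log(a.valueOf() === "foo"); // true, but only if you remember to unwrap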

An alternative is to have the reader pass back the sourcemap as a second AST, and to traverse that as needed to report on the state of the compilation.

The problem is that this is a cross-cutting concern, an aspect-oriented problem. Tracking which branch of the AST you just went down, or how far along you are in the linked list that represents that branch, is not easily instrumented in Javascript. I want the evaluate core to operate as McCarthy described in 1959, but, if run in a different context (oh, look, a monad!), to also return optional sourcemap details about the evaluation pass.

I’m going to spend some of today reading through other folks’ solutions. The non-Lispy javascripts are no help; they have their own paradigms for ASTs that don’t involve the self-modifying capacities of Lisp.

15Jul

The Key To Coding is Re-Coding

Posted by Elf Sternberg as chat, programming

In a recent, great article in the New York Review of Books, “The Key to Rereading,” Tim Parks takes the usual metaphor of reading as “the mind is a key unlocking the meaning of the text” and turns it around:

The mind is not devising a key to decipher the text, it is disposing itself in such a way as to allow the text to become a key that unlocks sensation and “meaning” in the mind.

Parks also asserts that this unlocking process isn’t achieved upon a first read, but upon re-reading. Once, again, and again, and again.

The astonishing thing, to me at least, is that anyone actually needed to say this. Every person who lives in the world of mathematics (and computer science is a branch of mathematics, although if homotopy type theory is correct, every branch of mathematics is in fact a subset of computer science theory) knows this beyond any shadow of a doubt. I hinted at this earlier in my article “A-hah! Learning and Grind Learning,” after which someone said to me that the “A-hah!” moment only comes after the grind: after you’ve spent all the intellectual energy necessary to push the neurons into the right pattern, after you’ve felt those neurons reaching out to each other, after the whole of the pattern coheres through hours and hours of study and research and experimentation, there’s this sudden “Oh! That’s how it works!” moment.

Reading papers, writing code that does what those papers describe, tracing the meaning of that code as it does what it does, all leads to that moment when the key turns in the lock. When you get it. I liken it more to knowledge beating down the doors inside my head, relentlessly creating meaning where once there was none. That’s a slightly more violent metaphor than Parks’s, but it comes down to the same thing. The knowledge isn’t really in either the book or in your head. It’s when you’ve done the same exercise twice and in different ways that the noise of syntax and source code fades into the background, and the glittering beauty of the lambda calculi, or the sheer messiness of ad-hoc continuations like try/catch/finally, become utterly clear.

The other day, Jim Bird wrote a pretty good article on why there’s so much bad software out there, and the article laid the blame on bad managers. While I think Bird’s approach is correct, he’s actually missing one critical detail.

It’s not always that managers are “bad.” Bird’s list of badness includes poor planning, failure to test at each level of development, failure to talk to customers or put prototypes and minimum viable products in front of them, failing to follow best practices, and ignoring warning signs.

Bird concludes with: “As a Program Manager or Product Owner or a Business Owner you don’t need to be an expert in software engineering, but you can’t make intelligent trade-off decisions without understanding the fundamentals of how the software is built, and how software should be built.” Which is completely contradictory advice, because I think you do need to be, or hire, someone who can deep dive into the engineering production pipeline.

The practice of the “five whys” says that asking “Why did this happen?” through five layers of reasoning will bring you to the actual root cause of the problem. There’s an old joke that states that the fifth why is always some variant of, “Because we were lazy.” The Wikipedia article linked to actually ends with an example of that!

But I think laziness is the second most common problem in software management. The most common problem is fear.

This is most obvious in the high-tech corridors of San Francisco and Seattle, but it’s true everywhere. We’ve created a perverse incentive structure where the best way to move up is to move on: the quickest way to get a raise in this business is to leave your job and move on to a different job. Poaching is widespread and pernicious.

On the other hand, managers are terrified of losing people. Hiring well is hard, and losing an employee means losing whatever institutional memory they had built up before they left. Along with the pressure of producing software, managers have to live with the knowledge that their developers could flee at any moment. So managers are willing to let some things slide if it means they’re able to hang onto their “10X” developers: they’ll ignore standards and allow redundant, dead, unrefactored, cut-and-paste, poorly abstracted, globally defined, untyped (or worse, “Any”-typed), uncontracted, YAGNI-violating code with no test coverage. Managers who have been out of the coding loop too long, or who are unsure of their position, lack the backbone necessary to tell developers, “Make it better.” It’s not enough if the code “passes quality assurance” when QA doesn’t know the ranges each level of abstraction is expected to honor.

But this is the kind of code 10X programmers develop. It’s mostly junk: it’s held together with baling wire and its interactions exist only as a terrible map hidden away in the developer’s brain. It only becomes useful after 10 other developers have cleaned up all its rough edges and applied its cruft to more environments than the 10X guy’s laptop.

But because that code does what the spec says it should do, provided the “just right” environment and the “just right” amount of babysitting, upper management thinks he’s a genius and his line managers are terrified that he might leave. Everyone seems to think the 10 people actually making his code saleable are overhead.

It’s not enough if the code “sells well” in its first few months if future maintainers can’t read what’s later presented to them– especially after the original developer has left the company. It’s not enough to have a VCS (Version Control System) if the manager doesn’t have or won’t enforce standards of communication and etiquette in commit messages, code review requests, or code review responses.

The problem is ubiquitous, and it’s exacerbated by the external realities of the peripheral price spirals caused by this environment: the insane housing prices in SF and Seattle create an awareness in the developers that the grass may be greener elsewhere, that affording that next residence or next toy is just a job hop away.

There is no job loyalty, sadly. The closest thing companies can do is offer stock options, in a golden-handcuffs pattern that keeps only the most dedicated, or most desperate, or least ambitious, and keep re-offering them to valuable employees. (True story: my tenure at Isilon ended in 2008 with the mass layoffs of the recession and Isilon’s buyout by EMC. In March 2009 my Palm Pilot, which had been silently sitting in its cradle for years, beeped suddenly. Curious, I read the message: Your last options award from Isilon has matured. Submit your resignation. A little late, but I had cashed everything I could already.)

But if companies want to stop spinning their wheels, they need to get things right the first time. Yes, that means slow down. Yes, that means, plan ahead. But most important of all, have a spine. A culture of excellence requires a culture of courage.
