19Jun

“What Motivates You?”

Posted by Elf Sternberg as Uncategorized

I won’t link to the Orange Website for Hacker Bros, but a question asked yesterday has collided with a conversation I had had earlier this week. The question was: “What motivates you to do what you do?” I had a simple answer. These guys:

disgust-anger-fear

If you’re not familiar with the brilliant Pixar film Inside Out, the film depicts an adolescent girl named Riley, and her internal struggle and emotions as she comes to grips with her parents’ decision to move to the big city. Those are three of Riley’s five primary emotions, from left to right: Disgust, Anger, and Fear. (Not shown: Sadness and Joy.)

Almost everything I’ve ever done has been due to an emotional reaction, usually one of the three above. While I find my fiction writing to be a joy and a pleasure these days, three million words started with disgust: “Good grief, people, why are you liking that horrible Brady Bunch fanfic so much? It’s terrible! Anyone can do better than that. Let me show you.”

A lot of my smaller contributions to other open source projects have been due to anger. I was angry when my joystick didn’t work after getting Freespace compiled on Linux, and thus an obscure driver was born; I was angry when I couldn’t get my porn off Usenet, and thus the bugfixes to Python; I was angry when Evernote changed its policies, and thus my Enex ripper; I was angry when Delicious shut down, and thus my Delicious ripper. And so on.

The biggest emotional reaction I have, though, is fear. Fear that I’ll be obsolete someday, fear that as I get older, I won’t be so good they can’t ignore me, no matter how many grey hairs I’ve got, fear that my skillset will become irretrievably outdated if I stay in one place for too long.

I actively envy a lot of developers who find cool stuff to work on and really seem to get a thrill out of the long-term chase. I wish I did. I do get small frissons here and there, and there have been times where I have glimpsed something rare and wonderful, chased it down and admired it for what it was. I love learning new things.

Mostly I love how learning new things keeps the fear at bay. Which is probably not healthy. But it is what I’ve got.

I’ve been struggling with the notion of theories and provers in the context of computer programming for a long time now. I was reading Peter Naur’s general concept of Programming As Theory Building and I don’t see how it connects to Type Systems at all.

In Naur’s thesis, there are three kinds of objects: (1) The world of physical things and events, (2) the world of mental objects and events (qualia?), and (3) the actual products of thought such as theories, stories, myths, tools, and social institutions. Programming, Naur says, is about the creation of objects of the third kind: the reification of thought products into processes. It isn’t enough to be intelligent; one can intelligently and skillfully perform all sorts of actions such as driving, typing, dancing and so forth; one must also have a theory, a context backing one’s actions, such that one can meaningfully and coherently explain the theory, answer questions about it, and argue on its behalf.

In Naur’s Theory Building View of Programming (1985), the act of programming is the construction of a theory about how human processes can best be supported by one or more computer programs. This sorta coincides with Yourdon’s premise (1989) that one can’t begin to cybernize (Yourdon’s word) a process until one clearly understands the process he is seeking to enhance or replace. Prior to the Age of the Internet, Systems Analysis and Design was a serious discipline (I’m a Trained Cyberneticist™, kids, don’t try this without professional help!) in which the first step was to completely document the end-to-end product of some paper-pushing organization, and then try to figure out how to best enhance or replace that organization’s paper-pushing with a more efficient computerized version. It wasn’t enough to replace notebooks and carbons with keyboards and screens; sometimes, you had to replace whole systems with streamlined or adapted versions. We were taught that Analysis and Design was a holistic discipline, not just a programmatic one.

One place where I think Naur breaks down, and this may be due to the tools and processes available to him at the time, is where he says that he believes it is impossible to embed enough of the Theory (the outcome of the original team’s analysis and design) in any given program for it to be clear to other programmers without further input from the original team. The Theory is clear only in the team’s collective intelligence as they develop the program, and the program only articulates some mechanistic processes that support the theory.

I also believe that Naur falls down in the face of modern development because the disciplines of programming simply weren’t all that great in the 80s. Too much of it was dedicated to trickery to get more performance out of the system; the Theory embodied not just what the program had to do, but specialized assumptions about the hardware that would support it. Finally, programs today are built from dozens, perhaps hundreds, of libraries; I don’t believe we actually need a Theory for each library to competently use them in our productions.

I do believe we need a Theory for our own production, to explain why we need so goddamned many libraries, however.

Naur claims that few software development houses work according to a Theory. He may be right. There are tens of thousands of "IT shops" in this country, and most of them don’t really care. They just want to develop Yet Another App; their knowledge of the underlying technologies, be they HTTP, SSL, virtual machines, containerization services, or databases, is so tenuous as to amount to empirical daemonology: poke it and see if it pokes back. But that doesn’t matter, as CPU power is cheap and there’s enough scaffolding holding the whole leaking, clanking, steaming, whistling contraption together that it lurches forward like Howl’s Moving Castle, so they don’t have to care. There are days when I feel like I’m working with a theory that is, at best, vague and hard to define.

As near as I can tell, the terminology of "theory" in Type Systems has some overlap with Naur’s: the programmer uses Types to constrain what the program does to fit within the Theory in the programmer’s head, and uses the Type Checker to assert that the Theory is complete and comprehensive. But I don’t think that the notion of "provers" in Type Systems has anything to do with Naur’s thesis.

Naur’s Programming as Theory Building is still relevant 30 years after its publication (and the year I started my CS degree program), but it describes challenges most of us no longer face, and was written at a time when the cognitive support tools were a desert wasteland compared to what we have now. It’s worth reading, and it’s worth supporting, because if you write software and you don’t have a clear theory about what the program will be and what it will do, you can’t write a good program.

So, I discovered a little miracle the other day. I’ve gone in heavily for Emacs org-mode, which is about 50% of what I want. It’s not terribly visible, and what I really want is something that is heavily web-enabled, but in the meantime it’s "good enough." Even better, there’s a good Org Mode to-do manager for Android, Orgzly.

But what really makes it worthwhile is Syncthing, a "Personal Dropbox" that syncs up directories and folders just the way Dropbox does. Everything is encrypted, and the bandwidth use is actually fairly low once the system is synced up. By running Syncthing on my laptop, my desktop, and my phone (and there is an Android app), I’ve finally got a set-up that gives me the power to manage my to-do lists with a reasonable front-end, a powerful syntax that I understand, and access from text editors, command lines, graphical UIs, and web-based interfaces.

The reason it’s not perfect, in my opinion, is that Org Mode is a sprawling utility that doesn’t do a good job of associating bookmarks with to-dos. And let’s face it, most of the time when we bookmark something, it’s associated with a personal project or task that we want to accomplish, even if it’s "Read this later for pleasure." It would be nice if Org-mode had bookmark tagging we could use ourselves. That doesn’t exist yet, and it would be fun if it did. Using at least limited machine learning to say, "This bookmark is associated with these projects" would be super-awesome.

Someday.

For now, though, I’m going to have to change my Wiki thing into an org-mode system, and then port my Delicious and Evernote archives to that system. Won’t that be fun?

10Apr

Posted by Elf Sternberg as Uncategorized

I realized this morning that, out of a team of eight people, there are only two dudes: me, and the guy who does the documentation. The project manager is a woman (who also writes code), the three other coders are women, the two QA personnel are women, and our UX designer and quality control is a woman.

The only thing that separates any of us from another is experience. I’m the oldest programmer, with a crazy amount of experience, and I tend to surge straight ahead for the most obvious solution, the one I have stored away in my archive of experience. “Always with the elegant code,” one said. “You use chain and lift way more than anyone else I’ve ever met. I’m not complaining, but sometimes I think you make us look bad.”

I hope not, because every one of them is a great developer. And I admit, I am sometimes lazy. I have a checklist that I sometimes fail to go through to make sure that, even when I have met the acceptance criteria, I have also gone looking for non-obvious problems with the code. I’m not a “10x programmer;” I don’t believe they exist, and the ones who are sometimes claimed to be such leave behind a mess it takes 9 other “1x” programmers to clean up. I don’t want to be that guy. That’s part of the reason I wrote git-lint, to make sure I literally could not check in anything that doesn’t meet a certain minimum quality standard.

There isn’t a developer on this team who isn’t completely competent to do the job. Every single one of them produces fantastic software that meets or exceeds our quality expectations and does the task. Our work isn’t the most glamorous; we’re currently building an automated inventory management system. But it has to work, because people pay us lots of money for it.

Every workplace has had people who can’t do the work. In my thirty years as a software developer, I’ve seen it all: the one with the bad drug habit that got worse, the not-so-subtle drunks, the just plain lazy flakes. The sad fact is that even those people were highly skilled, but they had emotional problems that held them back. Not a single one of those people was distinguished by their sex, though. Ninety-nine percent of the people I’ve worked with have been able to do the job competently and diligently. Almost everyone who has a knack for programming can do the work if they feel safe and justly compensated. It really doesn’t matter what color, sex, gender, or religion they are.

One day, some of my teammates will have had thirty years of great experiences, and be able to teach the next generation about elegance and diligence, the technical and the people skills. I know I could still use work on my “people skills!” (The ones with great people skills will probably be moved into management; I can’t say that’s good or bad, but I’ll be sorry to see them go.) Someday they’ll be able to just roll out the answer, because thirty years of experience will give them libraries of “known good” solutions and the habits of insight needed to apply them. Sexism, especially in software development, just confuses me. Brains are brains. The women are as good as the men. I would even describe several of either sex as damnably brilliant, and worth learning from. I’m just… what is it about some dudes that they can’t accept that?

Last year, I worked my way through Lisp In Small Pieces, implementing several variants of the Lisp engine Queinnec described, most of them in Coffeescript. I’ve decided this year that I’m going to continue building out my language experience and write a scripting language. I’m still working out the details, but I’m working my way now through Scott’s Programming Language Pragmatics while reading a lot of extra stuff on the side.

One of the gems I found recently was a copy of the Smalltalk-80 Implementation Guide. It is, to say the least, a fascinating book. It introduces a virtual machine in the context of programming in 1980, which was 37 years ago, and the assumptions the authors make as to what I’m expected to know as a potential developer of a Smalltalk environment fascinated me.

The book describes a stack-based, tree-walking virtual machine. As Smalltalk is a purely object-oriented language, the tree is completely reified by the OO environment; you choose an object and a function to start, and the interpreter walks that function’s statements and subroutines until execution ends at the end of the start function. Methods are dispatched according to a lookup table, and primitives are encapsulated in a large switch() statement.

What fascinates me about reading this is that I know, at least theoretically, so much more about writing VMs. The VM described here is a nightmare of cache misses and branch prediction failures; fully a third of a modern CPU’s efforts will go into loading, evaluating, and then disposing of anticipated operations that the VM will throw away unused. There’s no mention at all of JITting the program’s functions (converting long sequences directly into machine code) or even just its spine (writing the sequence of primitives as indirect, or even direct, unconditional branches so that the CPU’s branch predictor works most of the time; direct is cool, but requires that the primitives themselves be copied and modified for direct unconditional branching back to the spine).
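To make the book's dispatch style concrete, here is a minimal sketch, not the book's code, just a Python rendering with invented node and class names: a tree-walking evaluator where methods are found in a per-class lookup table and primitives go through one dispatch map standing in for the big switch() statement.

```python
# A toy tree-walking evaluator in the Smalltalk-80 style: methods are
# found in a per-class lookup table, and primitives live in one big
# dispatch map (the book's switch() statement). All names are invented.

PRIMITIVES = {
    "+": lambda a, b: a + b,
    "*": lambda a, b: a * b,
}

METHOD_TABLE = {
    "Point": {
        # "area" is a user-defined method: a small tree of primitive sends.
        "area": ("prim", "*", ("slot", "x"), ("slot", "y")),
    },
}

def evaluate(node, receiver):
    """Walk the expression tree rooted at `node` for `receiver`."""
    kind = node[0]
    if kind == "prim":                      # primitive send
        _, op, left, right = node
        return PRIMITIVES[op](evaluate(left, receiver),
                              evaluate(right, receiver))
    if kind == "slot":                      # instance-variable fetch
        return receiver["slots"][node[1]]
    raise ValueError(f"unknown node kind: {kind}")

def send(receiver, selector):
    """Method dispatch: look the selector up in the class's method table."""
    method = METHOD_TABLE[receiver["class"]][selector]
    return evaluate(method, receiver)

point = {"class": "Point", "slots": {"x": 3, "y": 4}}
print(send(point, "area"))  # 12
```

Every `send` is a dictionary walk and every node is a heap-allocated tuple, which is exactly why, as described above, this style is brutal on a modern cache and branch predictor.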

Since in Smalltalk almost any object can be a launching point, and since relationships between objects can be changed with the swipe of a mouse, the speed with which we’d have to recalculate any given set of relationships would have to be a priority. It would need Go’s internal speed of recompilation, only putting the results into an executable anonymous mmap()’d space, and then mapping everything that needs that code to be able to find it.

A modern Smalltalk VM could be a marvel to use. It would be a nightmare to write.

Be that as it may, choosing to write programming languages today, even ones on top of a system language like C or Rust, is to confront an embarrassment of riches. There are so many interesting ways to do something now. We have JITs! We have Algorithm W! We have concurrency problems like you wouldn’t believe! 37 years ago, they had two options: compiled code, or a stack-based switch statement on top of compiled code. The writers assumed you knew more about those two ways, and absolutely nothing about all the other stuff.

Joe Wright has an excellent blog post about Anti-If Patterns that I naturally believe everyone who writes programs ought to read, but there’s a detail that’s missing that I think is absolutely crucial to getting anti-if programming correct.

Types.

Yeah, you can yawn and leave now, if you don’t care. But I’m with Guido van Rossum on this, that once your software reaches a certain size the only way to stave off chaos is to demand more of the programmer up front, and that demand comes in the form of requiring programmers to understand the shape of the data, and to make sure that the pieces being passed between units of code fit perfectly.

Wright glances off the issue in his second pattern, “Use polymorphism instead of switch().” This is a common piece of advice, although it really only works when you have more than one switch statement; at that point, your switch statements are collections of methods that apply to different objects.

It’s his first (Boolean Params) and fourth (Conditional Expressions) patterns where he falls down a little. The most critical issue in both of these is the shape of the data. “Boolean Params” is just “Conditional Expressions” written as a lookup table. If we play the classic programmer exercise of zero, one, or many, a lookup table is a conditional expression taken from the “one” state to the “many” state. It is therefore absolutely critical to put your foot down and state for the record: In any conditional expression, for all sub-expressions, all left-hand values must share the exact same type, and all right-hand values must share the same type.

If this isn’t the case, you’ve created a way to sneak if back into the system, with separate code paths that must be unit tested. And down that road lies madness and unreliability.
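That rule can be made concrete with a small, hypothetical example: a conditional expression rendered as a lookup table in which every left-hand value is a string and every right-hand value is an integer number of cents. The categories and rates are invented for illustration.

```python
# A conditional expression as a lookup table. Every key (left-hand
# value) is the same type (str) and every value (right-hand value) is
# the same type (int, cents), so no disguised `if` with divergent code
# paths can sneak back in. Categories and rates are invented.

SHIPPING_RATE_CENTS = {
    "standard": 499,
    "express": 1299,
    "overnight": 2499,
}

def shipping_cost_cents(category: str, parcels: int) -> int:
    # One uniform lookup replaces a chain of if/elif branches.
    return SHIPPING_RATE_CENTS[category] * parcels

print(shipping_cost_cents("express", 2))  # 2598
```

If one of those values were instead a function, or a string, or a sentinel meaning "go do something else," you'd be back to separate code paths that each need their own unit tests.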

07Jan

The DIM Epiphany

Posted by Elf Sternberg as Uncategorized

When I was 13 years old, I laid my hands on my first true computer. It was a DEC PDP-11/44, later upgraded to a PDP-11/70, running RSTS/E and the BASIC/PLUS shell environment. Like every teenage boy, I started out by writing video games. Bad ones, naturally, since all we had were ANSI escape codes on a black-and-white monitor. And while my code kinda worked, there was a statement in the BASIC language that escaped my insight. DIM.

DIM A(100)

I really didn’t “get” what this statement meant or why it was necessary. Indexing didn’t make any sense to me at all. There wasn’t anyone to explain it to me, and the magazines and books I had weren’t helping. It was literally a year before I finally had an epiphany: DIM was creating a row of boxes in which to store things, and the index number let me say which box I wanted. Variables were no longer just storing things; they were storing things that pointed to other things.
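In a modern language, that "row of boxes plus an index saying which box" picture is a one-liner. A rough Python rendering of the idea (not of BASIC/PLUS's exact semantics):

```python
# BASIC's `DIM A(100)` creates a row of boxes; the index says which
# box. In Python the same picture is a list:

A = [0] * 100      # one hundred boxes, each starting out holding 0

A[7] = 42          # put 42 in box number 7
print(A[7])        # 42

# The variable A doesn't hold a number; it holds a thing that points
# at other things -- the epiphany described above.
```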

I had to visualize it before I understood it.

Dijkstra once said that “As potential programmers, students who have had prior exposure to BASIC are mentally mutilated beyond hope of reconstruction.” While that may sound a little harsh, I suspect there’s some truth to it. For years, that need to be able to visualize what a program was doing before I could trust that it was correct hampered me from moving forward with a lot of different and interesting projects. To this day, I still feel like I need concrete, working examples of something before I understand what it’s doing.

It gets worse when I’m working with highly abstract material. Learning to trust that singly linked lists do what they’re supposed to do was actually hard for me. It took more than a dozen tries, and to this day if we get past five links of responsibility I have to find a way to abstract the thing I’m working on into a singular concept, beating it into my brain that it’s okay, that chunk of code is going to do what I want it to. I want to know down to the electrons flowing through the wires what’s going on with my code.

A friend of mine, a highly accomplished mathematician in his own right, is fond of the notion of notation as a tool for thought, and that getting past the need to visualize is a critical stage in mathematical thinking. That’s probably true, and I fear I’ll never get past that critical stage. Indeed, at my age, it may already be too late.

That’s not going to stop me from trying, of course. I am not going to treat programming as applied demonology, which is what most developers do these days. Giving up is just not in my nature.

I wrote an example of Bob Nystrom’s “Baby’s First Garbage Collector,” which I’ve been wanting to implement for a while in order to understand it better. To make the problem harder (as I always do), I decided to write it in C++, and to make it even more fun, I’ve implemented it using the new Variant container from C++17.

I’ve never written a garbage collector before. Now I know what it is and how it works.

The collector Nystrom wrote is a simple bi-color mark-and-sweep collector for a singly threaded process with distinct pauses based upon a straightforward memory usage heuristic. That heuristic is simply, “Is the amount of memory currently in use twice as much as the last time we garbage collected?” If the answer is yes, the collector runs and sweeps up the stack.

Nystrom’s code is highly readable, and I hope mine is as well. Because I used Variant, my Object class has an internal Pair class, and then the Variant is just <int, Pair>, where “Pair” is a pair of pointers to other objects. The entirety of the VM is basically a stack of singly-linked lists which represent either integers or collections of integers in a Lisp-like structure.

The allocator creates two kinds of objects, then: pairs, and lists. A pair is created by pushing two other objects onto the stack, then calling push(), which pops them off the stack and replaces them with a Pair object. The VM class has two methods, both named push(), one of which pushes an integer, the other a pair. Since a pair is built from objects on the stack, the Pair version takes no arguments, and since C++14 and beyond have move semantics that Variant honors, Variant<Pair> only constructs a single pair. Pretty nice. I was also able to use both lambda-style and constructor-style visitors in my Variant, which was a fun little bonus.

In the end, this becomes a pair of linked lists with different roots; one pair can orphan the lists, the other can’t. We traverse up the spine of the stack, following each scope’s list and marking the objects as found. We then traverse the other list and, for every object not marked, we get a pointer to it, collapse the list around it, and delete it. Very simple and elegant.
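Since my version leans on C++ machinery, here's a compressed sketch of the same bicolor mark-and-sweep idea in Python. The names are mine, not Nystrom's, and for brevity the collector here runs on demand rather than on the doubling heuristic described above.

```python
# A bicolor mark-and-sweep sketch of the collector described above.
# Objects are ints or pairs of objects; the VM keeps a root stack and
# a list of every allocation.

class Obj:
    def __init__(self, value):
        self.value = value      # an int, or an (Obj, Obj) pair
        self.marked = False     # the two "colors"

class VM:
    def __init__(self):
        self.stack = []         # roots
        self.heap = []          # every object ever allocated

    def push(self, value):
        obj = Obj(value)
        self.heap.append(obj)
        self.stack.append(obj)
        return obj

    def push_pair(self):        # pop two objects, push a pair of them
        b, a = self.stack.pop(), self.stack.pop()
        return self.push((a, b))

    def mark(self, obj):
        if obj.marked:
            return              # already visited; cycles terminate here
        obj.marked = True
        if isinstance(obj.value, tuple):
            for child in obj.value:
                self.mark(child)

    def collect(self):
        for root in self.stack:                         # mark phase
            self.mark(root)
        self.heap = [o for o in self.heap if o.marked]  # sweep phase
        for o in self.heap:
            o.marked = False    # reset colors for the next cycle

vm = VM()
vm.push(1)
vm.push(2)
vm.push_pair()          # stack: [pair(1, 2)]; heap holds 1, 2, and the pair
vm.push(3)
vm.stack.pop()          # orphan the 3
vm.collect()
print(len(vm.heap))     # 3: the pair and its two ints survive
```

The "collapse the list around it" step from the C++ version becomes a simple list comprehension here; the mark phase walking the spine and the sweep phase reclaiming everything unmarked are the same two passes.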

I hope to move on to tri-color, multi-threaded garbage collectors someday. According to Nystrom, this garbage collector is the actual algorithm used in early versions of Lua, so it’s not a toy.

I have included the header files for the Mapbox version of Variant since the C++17 committee’s standards haven’t quite reached the general public and the Variant implementation is still a subject of some debate. This implementation looks straightforward enough and is a header-only release. It works with both GCC 4.8.5 and Clang 3.8, and that’s good enough for me.

The Mapbox variant is BSD licensed, and a copy of the license is included in the Include directory.  The source code is here.

BUILDING:

From the base directory of the project:

mkdir build
cd build
cmake ..
make

And you should be able to run the basic tests. It’s just one file. Unfortunately, I was simpleminded with the include paths, so it can’t be built anywhere but from the base directory without fiddling with the CMakeLists.txt file.

At work, I’ve earned a bit of notoriety for being a code style commissar. I’ve been part of the group that, from the bottom up, has been pushing people to use flake8 and eslint, and I even wrote a tool to make polyglot testing easier. My latest kick is types. After experimenting with them for two years, I’ve decided that they do make a difference, a huge difference, in the quality of code. They won’t catch every bug; today I discovered that a documented defaulting option hadn’t actually been included in the parser I’d written. Yay for unit tests. But also: yay for types. By switching to types, and actually using them, I’ve discovered where my thinking was weak and vague. Type checking is a constraint. Great art is always made within constraints.

It’s still not enough. You have to have good taste. You have to have an aesthetic sense for how the code communicates. You need to design everything. Design adds clarity. It’s not enough to make it work. I tell people you need to design and write, then redesign and refactor, every project, and you need to book time for doing so. Function and variable names need to be clear and communicative (keep a thesaurus in your bookmark bar!). If you’ve got an excessively clever bit of code, break it into smaller units and give those units descriptive names. If your company has a preferred organization for classes and modules, use it consistently. If not, use an organization that best tells a story. Use nested functions to isolate unique functionality. Do the same thing with the filenames in modules.

This stuff takes practice.

There’s a saying in software design that I have come to loathe: “Write your code as if the guy who has to maintain it is a violent psychopath who knows where your desk is.” I heard a guy up from the San Francisco office say this the other day and my reaction was, “Nobody works well when they’re afraid. If you want great software, you have to have, like, Buddha-levels of respect for the next person.” I then went into a (small part) of my rant above.

I guess I must have been rather forceful and passionate in my defense of good taste in software design, but he next said, “Why do you care so much?” I found the perfect formulation:

Programmers work at the limits of their understanding, with their brains full. Maintaining code is harder than writing it anew because you have to understand what the original writer was trying to do while they were writing code at their limits. If you have respect for the next person who has to modify your code, including the skills they’re likely to have, then you will write software that’s as clear and easy to understand as you possibly can. This means that any problems, any future extensions, and any customer requests can be met in hours instead of days, or days instead of weeks. The maintainer is happy because they aren’t frustrated by code ugliness or made to feel stupid by excessive code cleverness; instead, they can fix the problem easily. Happier maintainers get stuff to market faster, and that makes customers happy. Happier customers buy more of your stuff. That makes stockholders happy. Happier stockholders reward you with bigger raises.

Aside from the money, I’m happier when I feel pride in my workmanship. Don’t you?

In my ever-shrinking spare time, I write stories. When I’m writing a particularly long story, or dabbling in the 300-odd episodic space opera I’ve been working on for twenty-five years or so, I have to read and re-read the story to make sure that every plot thread of the story has been closed in a satisfying manner, every macguffin has been stowed away, and every character’s character has been fully revealed and shows consistency throughout. Even then, when I go back and re-read some of my work, I see lines that were supposed to lead somewhere, but didn’t, and sometimes a character says something about an event in the past, but the actual scene being referenced has been cut out for whatever reason.

Even in the best books, while the main plot remains resolved, a sub-plot might not actually be completely hooked up. The most famous of these is in Raymond Chandler’s The Big Sleep: we still have no idea what happened to victim Owen Taylor.

Human beings can forgive an omission like that if something else about the book is good. For Chandler, it’s all about style, and the style he invented, private detective noir, is breathtaking in its originality.

Computers, on the other hand, are spare and unforgiving. If a programmer forgets to hook something up, a crash is inevitable. It’s only a matter of when and where.

I recently praised MyPy, a new type hint system for Python that, I claimed, eliminates an entire class of errors while writing Python.

I think it’s important to emphasize how important I think MyPy is, because here’s the horrible truth: computer programmers have no idea what they’re doing.

Imagine, if you will, a newly installed private inventory management system (PIMS) for a large company. Different offices will send you spreadsheets of data, rows of inventory, and column headers to describe what the rows mean. Meaning is the most important thing here; computers are just glorified calculators, and it’s human beings who apply meaning to what they’re doing.

Without the column headers, though, a row of spreadsheet data is meaningless. It’s just numbers and names. A human might be able to guess that the column with entries like “New York” and “Boston” holds city names, but what do all those numbers mean? And without specialized software, the computer doesn’t care about cities at all; they’re just strings of letters.

The PIMS system lets you upload the spreadsheet, and then you, the human, apply meaning. “That’s a computer.” “That’s a piece of software.” “That’s a chair.” Programmers tend to abstract things to their basics. “Computers and chairs need to be shipped; software can just be sent via email; real estate can’t be transferred at all.” That sort of thing. Inventory may have location, and ownership: “That is Bob’s chair in the Chicago office.”

So here’s the horrifying truth: most programs, internally, don’t apply any meaning to what it is they’re handling. This means the programmer didn’t apply meaning. The programmer had meaning in his head, but was so busy getting from input to output, applying that meaning and rules along the way, trying to hit a deadline, that the meaning never got encoded in the code itself. A spreadsheet starts out as rows of hand-labeled columns; it ends up in the database as entries in various hand-labeled tables of rows and columns. In the middle, it lives as blobs.

Blobs. That’s it. A Python list or a Javascript array is arbitrary: it can contain anything: numbers, strings, other lists, a mix of all of the above. The same is true of the dictionary. They’re just “objects”; they have no meaning. The programmer writes functions that handle ListsOfShippableThings, which in turn call functions that handle OneShippableThing, which in turn call a TransportCompany, and so on. But what he passes them is a List. He could accidentally call ShipShippableThings with a List of RealEstateThings. That might be found in testing; it might be found when someone tries to ship something; it might actually work, in that an InventoryItem is marked to be shipped even though it’s a square city block in downtown Manhattan!

Almost all web software is written this way. We call it “Duck Typing”: if it walks like a duck and quacks like a duck, it’s a duck. If a RealEstateThing is an inventory item with a location, the ShipShippableThings function might say, “It’s an inventory item at a place, yep, we can ship that.”

The beauty of MyPy and Typescript is that, with good taste in naming things, and proper training, you can’t write software where you try to ship a RealEstateThing; long before the code runs, your editor or repository checker will say, “You have code here where you’re passing a RealEstateThing to a TransportCompany through ShipShippableThing. That doesn’t make sense.”

And it doesn’t. But when you’re a programmer, it’s easy, among the hundreds of things you might be keeping track of, to run everything through the “CheckIfNeedsShipping” code, never realizing that your list contains things that can’t be shipped.

There are all sorts of examples. Grocery stores have things that can’t be eaten and don’t spoil; pharmacies are full of things that can’t be injected. Constraint is one of the most powerful ideas in computer science, and when we wrote software that freed us from the constraints of having to tell the computer how much memory we needed for an object, we also lost the constraints we had on having to describe that object clearly.

I was very disappointed when I read Eric Elliot’s You might not need Typescript (or Static Types), because he claims that Typescript, and its constraint checking, slowed him down and didn’t reduce bug count. He talks about how typing is “distracting”; his developers would rather just use blobs of text that know exactly what they’re passing around. He says unit tests are a great way to know if his code is working, but that’s true only if he tests the right things.

I’ve been writing in Javascript and Python for twenty years now. (Not kidding about that, either. Seriously. In 1996 I’d been working in Perl for four years professionally already.) Nothing, and I mean nothing, is more exciting, more useful, and more indicative that the software industry is finally growing up than the popularity of static typing for these languages.

Elliot describes duck typing as “checking that looks at the structure of a value rather than its name or class.” In older, rigid languages like C and C++, a similar thing is called Structural Typing; it doesn’t matter what I, the programmer, claim that thing is; all that matters is that it has the right layout in memory. If, coincidentally, “PizzaDeliveryGuy” and “LaunchNukesOrders” have similar layouts in memory (maybe a launch code is the same number of bytes as a phone number!), well…

What MyPy and Typescript do is called Nominal Typing. Computer functions are about intent. “I intend to order a pizza.” We teach how to name functions because we want to clearly communicate intent. The same thing should be true of the data we work on: it should describe what it encapsulates.
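The pizza-versus-nukes example above can be rendered nominally in Python with typing.NewType: both values are just strings underneath (the same bytes), but the distinct names let a checker refuse to confuse them. The names here are illustrative, not from any real system.

```python
# Nominal typing in miniature: both types are str underneath, but
# NewType gives them distinct names, and a checker like mypy refuses
# to pass one where the other is expected.

from typing import NewType

PhoneNumber = NewType("PhoneNumber", str)
LaunchCode = NewType("LaunchCode", str)

def order_pizza(number: PhoneNumber) -> str:
    return f"dialing {number}"

phone = PhoneNumber("555-0123")
code = LaunchCode("00000000")

print(order_pizza(phone))          # fine: the intent matches the name
# order_pizza(code)                # mypy: LaunchCode is not PhoneNumber,
#                                  # even though both are str underneath
```

A structural checker would shrug at that last line; a nominal one treats the name as part of the contract, which is exactly the "intent" argument above.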

I have no idea how big Elliot’s programs are, or how much time he spends trying to figure out why something crashed. Nominal typing has reduced the amount of time I spend on that by half. To me, that’s a strategic benefit no amount of “move fast and break things” will ever match.
