Pandastrike has a really good article called Facebook Relay: An Evil And/Or Incompetent Attack On REST, in which the author takes Facebook to the woodshed for not understanding REST, for trying to break REST, and generally for being your classic embrace / extend / extinguish (or étourderie, a beautiful word that has sadly fallen out of the English lexicon) big company imposing its will on everyone else.  As a GraphQL fan, I wanted to like Relay, but every time I played with it my principal reaction was, “Okay, what is this really for?”  Pandastrike goes on to say that it’s good for only one thing: social networking data at the massive scale Facebook faces.

But Pandastrike makes one really terrible faux pas of their own in the article.  They make a point of quoting Roy Fielding, but then, in the section on REST endpoints and type safety, write:

Although JSON Schema is not part of HTTP proper… if you use content types correctly, and also use JSON Schema with JSCK, you get strong typing over HTTP.

This is true, as far as it goes.  But it misses two incredibly important parts of Roy Fielding’s work, and makes me suspect their intentions.  JSCK, you see, is a product produced by Pandastrike.  And Fielding himself has said that doing REST with JSON is incredibly hard.  So hard, in fact, that his original work on REST noted that the transfer of representational state automatically implied hypertext as the representation of state transfer.  JSON is a terrible tool for hypertext.  You know what’s a great tool?  HTML: HyperText Markup Language.  It’s not just for the browser and the page itself; it’s for every transaction you commit between the browser and the server, and it carries with it all the metadata needed to make sense of the content.  Even better, unlike JSON, HTML has its own type-checking mechanism that is part of the HTTP/HTML dual standard: its DTD, or Document Type Definition.  You’re not required to use the whole HTML standard, and you can even use XML with a stripped-down version that still comes with a DTD.

Pandastrike goes on about Facebook’s raking attack on a behavior scheme that’s been around for, oh, call it ten years.  But HTML and DTDs have been around for twenty.

I’ll be fair: Working with HTML and XML on the browser is painful compared to JSON.  It uses more CPU to render and it takes more tooling to program correctly.  Nobody does it that way.  But to ignore it and imply you have a magic solution to an unsolved problem is to be as deceitful as the people you’re criticizing.


This is my very simple secret weapon in doing complicated data transforms on the client side, which I do a lot of when I’m working with Splunk data.

_ = require('underscore');

_.mixin({
    makerail: function(predicate) {
        if (_.isUndefined(predicate)) {
            predicate = _.isUndefined;
        }
        return function() {
            var args = arguments;
            return function() {
                var result = args[0].apply(this, arguments);
                if (predicate(result)) {
                    return result;
                }
                for (var i = 1, l = args.length; i < l; i++) {
                    result = args[i].call(this, result);
                    if (predicate(result)) {
                        return result;
                    }
                }
                return result;
            };
        };
    }
});

_.mixin({rail: _.makerail()});

In its default configuration, rail() calls each argument in sequence, much like compose(), passing any arguments of the resultant composed function call to the first function, then passing the result of that function to each subsequent function. It’s basically like the arrow operator in Lisp, performing each step in a left-to-right fashion, rather than the right-to-left of Underscore’s compose function. However, it also *terminates* the moment any function produces undefined as a result, shorting out the composition and returning undefined right then.

It’s possible to call makerail with an alternative predicate:


var railway = _.makerail(function(result) {
    return (result instanceof Error);
});

In this example, makerail() implements what F# calls “railway oriented programming”: when any function returns an instance of the Error class, all the remaining functions are skipped, without generating an exception and without the performance or flow-control headaches of invoking an exception handler.  It’s actually rather nifty, and encourages a much more holistic approach to dealing with long data flows.

The past two weeks I have volunteered a couple of hours of my time at the local high school, teaching a small class of kids the fine art of HTML and CSS. I have only one hour of interaction with the kids each week, but I spend about two to three hours beforehand prepping materials and getting ready.

The first class was a blitzkrieg of ideas. A bit of “A website is a collection of web pages around an idea” and “A webpage is a chunk of HTML filled in with other stuff to give you one view of the idea” and so on. A map of a fairly complex web production environment: The “business thing”, the business logic, routers, databases, HTML, CSS, Javascript, Canvas, SVG, WebGL, etc. etc. etc. The number of websites I’ve built where “the business” was a completely separate server with a simple frontend written in Django, Catalyst, or Express shows the maturity of the model.

And then we hit the wall. As a demo, I wanted them to all open up a file, edit an eight-line HTML file (HTML, HEAD, TITLE, BODY, PARAGRAPH, CONTENT, plus closures), save it, and view it in the browser.

The only tool these kids have for this is a bunch of Chromebooks. Most of them can’t afford laptops. The school provides them the Chromebooks, and their own Google Drive locations and accounts. So that’s what we had to work with.

Problem number 1: These kids have no idea what “plain text” is. Every tool they’ve ever used comes with options to pick a font, do bold, do italics. When I asked them how the computer “knew” to use bold or italics, they shrugged. I had to explain that the zeros and ones saved to their storage contained extra zeros and ones to describe the decoration, the bolding, the italicizing, the font selection. We were going to add the decoration back ourselves, using HTML. But before we could do that, we needed to use the simplest storage format there was, the one with no decoration, the one where every character you saw was the same as the one you saved, with no additions, no annotations, no decorations.

The ease and convenience of RTF and other “printable” or “web-ready” formats has completely ruined these kids’ understanding of what actually happens underneath the covers.

Problem number 2: These kids have no way to correlate files to URLs. The lack of a traditional storage medium, and the introduction of Google Drive, means these kids have no mental map for associating a “web location” with a “filesystem location”. Everything is seemingly ad-hoc and without a real-world physical reference. This is probably the lesser problem, as storage is already very weird and about to get weirder, and we’re all just going to have to live with that fact.

The second class went better, and I went much, much slower. This is a hands-on class where I lead them through a couple of exercises and help them figure out weird things they can do with HTML. We figured out work-arounds for Google Drive and practiced our first, basic HTML, like headers, lists, and so on. And I gave them their first styles. They had fun figuring out random colors that seemed to work for their backgrounded objects.

There’s that Simpsons episode where some adult male says “Am I out of touch? No! It is the children who are wrong!” Well, maybe I am out of touch, but it really seems to me that Chromebooks may be fine for accessing the World Wide Web, but as a tool for developing on the web, they’re more a hindrance than a help.

Seeing as it’s January, that means that we go through many accounting phases about what happened last year. Most of the ones we go through publicly are ones about how we spent our time: did we work out enough, write enough, study enough, love enough. Others we Americans tend to go through with a deep sense of reserve and privacy, mostly about money.

Last year I made what is, to me, an insane amount of money. Far more than I ever thought I’d be making in any given year. And it’s more than the year before that; in fact, it’s been steadily going up every year since the 2008 recession. Even after adjusting for inflation, I’m still making more per year than my father did, which I have to say is utterly mind-boggling, since he was a radiologist, a pioneer in nuclear medicine, and a real estate mogul all at the same time.

Yet the connection between work and reward has never felt more tenuous to me.

I’m currently in a large infrastructure position where, nominally, I was hired for my skills as a software developer, yet I now joke that I write code during the commute because they don’t let me write it at work. Instead, I manage configuration files. I worked on fleshing out a platforming initiative for a massive chunk of network monitoring software; that platform is now mature enough that the skills I initially brought to the table are no longer needed. The real skills I spent twenty years acquiring are now being allowed to decay while I fiddle around the edges of an impressively large but intellectually dull enterprise software product.

On the other hand, because it is a network monitoring tool that helps prevent enterprise-scale service failure, financial loss, and outright fraud, there is an unbelievable amount of money sloshing around the sector, and my company has seen fit to reward me repeatedly with bonuses, raises, and stock options.

And yet, I know I don’t work nearly as hard as the average apple picker in the agricultural regions just east of where I live. I am not as ambitious or as much of a go-getter as many of my co-workers; I’m consciously on a daddy track, and I’m not going to sacrifice my family’s time to my employer. I do my job, hopefully well, and go home at the end of the work day. The maturity and prosaic nature of the project, I confess, leaves me with little desire to push the state of the art. (This is the flipside of my time at IndieFlix or Spiral Genetics, where I worked like a dog and put in evenings because the project was flippin’ cool.)

I really don’t have any ambition to “maximize shareholder value” except to the point that I’m currently a shareholder myself. I have an ambition to make the world a better place. Every job I’ve had of the first type paid excessively well; every job of the second type was inspiring and made me feel good about myself.

When I read about that weird Silicon Valley meme that “we work hard, so our rewards are commensurate with what we do,” I have to shake my head and wonder: really? Canada’s Micronutrient Initiative costs about $200 million, and has prevented almost 400 million cases of life-threatening birth defects in India, Canada, and North Africa; Candy Crush is worth $7.1 billion, and I doubt its developers actually work as hard as the people hauling sacks of iodine crystals through the third world’s back roads.

The disconnect between effort and reward has never been as stark or as absurd as it is today. My experience is a microcosm of that disconnect. I’m happy to do my job, and happy to get paid to do it, but I can’t help but feel that there’s something very off about the relationship between the two.


Interview Question

Posted by Elf Sternberg as Uncategorized

I won’t reveal where or when I got this question, but it always amused me.  At the time, I answered it using Underscore and Coffeescript, which the interviewers allowed I was going to have access to… but here’s a pure ES6 solution.

The problem, simply stated, was “write a function that sums two polynomial equations and prints the results.”  They defined the format for the input this way:

// 5x^4 + 4x^2 + 7 
// 3x^2 + 9x - 7
var e1 = [{x: 4, c: 5}, {x: 2, c: 4}, {x: 0, c: 7}];
var e2 = [{x: 2, c: 3}, {x: 1, c: 9}, {x: 0, c: -7}];

They were kind enough to let me code on my keyboard.  My answer is rather dramatic.

// Reduce any list of equations into an array of maps of exponent:coefficient
var eqns = [e1, e2].map((a) => a.reduce((m, t) => { m[t.x] = t.c; return m; }, {}));

// Find the largest exponent among all the equations
var maxe = Math.max.apply(null, => Math.max.apply(null, Object.keys(a))));

// For the range (maxe ... 0), for all equations, sum all the coefficients of that exponent, 
// filter out the zeros, sort highest to lowest, create string representations, and print.
console.log(
    Array.from(new Array(maxe + 1), (x, i) => i)
        .map((exp) => [exp, eqns.reduce(((memo, eqn) => memo + (eqn.hasOwnProperty(exp) ? eqn[exp] : 0)), 0)])
        .filter((e) => e[1] != 0)
        .sort((a, b) => b[0] - a[0])
        .map((e) => e[1] + (e[0] > 1 ? 'x^' + e[0] : (e[0] == 1 ? 'x' : '')))
        .join(' + '));

The interviewer just stared at it, and stared at it, and said, “I’ve never seen anyone solve that in three lines.  Or that fast.”

I shrugged.  “It’s a straightforward map/reduce of the relationship between exponents and coefficients, removing any factors that had a coefficient of zero.  This seemed the least buggy way to do it.  The riskiest part of this equation is the mapping back to string representation.  The nice feature of this function is that if we generalize the first line over an arguments array, it works for any number of equations, not just two.”

He agreed.  They ultimately didn’t hire me.  I had a friend there, and he said, “They really liked you, but it was pretty clear you were already bored where you were and moving from one infrastructure job to another wasn’t going to change that.”  Sad but true.



Lisp In Small Pieces, Chapter 5: The Storage Story

Posted by Elf Sternberg as Lisp

One other thing about Lisp in Small Pieces chapter 5 jumps out at me: the storage story.

In the interpreter written for Chapter 5, some things are cons lists (most notably, the expression object you pass into the interpreter), and some things are lists, but they’re not built with car/cdr/cons.

In chapter 3, we built an interpreter that used full-blown objects, in which each object had a field named “other” that pointed to the next object; when looking up a variable or an unwind point, the search was an explicit call: starting with the latest object, a search would begin down the chain for a match and, when found, would trigger either a memory retrieval or a continuation, at which point the interpreter would resume with the relevant memory or continuation. Each object had a “failure” root class that would throw an exception.

In chapter 5, it gets even more functional. Chapter 5 tries to define everything in the Lambda Calculus, which allows for closures, but doesn’t by default support objects. But Queinnec really wanted to teach about allocation issues, especially the boxing and unboxing of values, so to make that point he created two structures: one representing variable names that point to indexes, and one representing an indexed collection of boxes. Lookup represents the Greek equation σ (ρ ν), which is to say that the environment knows the names of things, and the store knows the locations of things.

But in order to be explicitly literal, Queinnec goes full-on. Both environment and store are represented the same way. He creates a base environment that looks like this:

ρ.init = (ν) -> "Variable name not found."

and then when we add a new variable name to the stack, we write:

(ρ, ν, σ, κ) -> (ν2) -> if (ν2 == ν) then κ(σ) else ρ(ν2)

. In this case, we call a function that creates a function that, in turn, says “If the name requested matches the name at creation time, return the stored store point (actually, continue with it), else call the next (deeper) environment, all the way down the stack until you find the thing or hit ρ.init”.
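A rough JavaScript transliteration of the idea, assuming a flat array as the store and closures as the environment (all the names below are mine, not Queinnec’s):

```javascript
// The environment is just a function from names to store locations;
// the base environment knows no names at all.
function baseEnv(name) {
    throw new Error("Variable name not found: " + name);
}

// Extending the environment returns a NEW function that knows one
// name and delegates every other lookup down the chain.
function extend(env, name, location) {
    return function(name2) {
        return (name2 === name) ? location : env(name2);
    };
}

// The store maps locations (indexes) to boxed values.
var store = [];
function alloc(value) { store.push(value); return store.length - 1; }

// sigma(rho(nu)): the environment knows names, the store knows locations.
var env = extend(extend(baseEnv, "x", alloc(10)), "y", alloc(20));
console.log(store[env("x")]);  // 10
console.log(store[env("y")]);  // 20
```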

It’s a really cheesy way of emphasizing that you can do Lisp in a full-on Lambda Calculus way, but you probably shouldn’t. It’s also completely dependent upon the parent environment to reap memory when you’ve examined the tip of an expression and have retreated back toward the base of the expression tree to proceed down the next expression.

The lessons here are about the Lambda Calculus, and about memory management. In the latter case, the lesson is how hard it’s going to be if you want to do it the way the big boys do.  Garbage collection is hard.


Lisp In Small Pieces, Chapter 5.

Posted by Elf Sternberg as Uncategorized

I lied when I said I’d completed Chapter 5 of Lisp In Small Pieces. I went back and re-read it, and realized that I was going to have to do the exercises in the chapter if I wanted to understand what was going on well enough to get through chapter 6.

Chapter 5 wasn’t nearly as hard as I’d thought it was going to be. It also wasn’t much fun. No new interpreter came out of it. Instead, what I got was the same interpreter, with yet another layer of indirection.

Queinnec is making a point in chapter 5. “If you’re going to present at OOPSLA, you’ll need to know how the big boys write.” And the big boys write in Greek. The chapter is about “the meaning of a program,” and turns the core switch statement into a quasi-readable “the meaning of this expression is the meaning of this expression when…” with pattern matching for conditionals, abstractions, and sequences. The meanings themselves become repetitious definitions of expression, environment, memory, and continuation.

Oh, yeah, did I mention that absolutely everything in this is in continuation-passing style? Madness. It made sense, when I did it, but it was painful to work through all those changes and make it all work.

There’s a giggle where Queinnec explains that Greek is used because each letter becomes a shorthand for something: an addressing scheme, a naming scheme, a continuation, an expression, so it’s possible to fit the entire language definition on one page, leaving you nine pages to explain whatever it is that’s unique about your language. “Greek letters are used because most programming languages are limited to the ASCII character set.”

Obviously, this book needs to be brought into the 21st century. We have Unicode now. I was able to copy the Greek verbatim into my comments, e.g. f[y → z] = λx . if y = x then z else f(x) endif.  Note the lambda and the other Unicode characters supposedly out of my reach. Not only are they accessible to me, I’ve permanently encoded them into my keyboard. (Had to use that Windows key for something, after all.)

Translating the Greek into Scheme, and then into Coffeescript, my target language, was fun. No, really. When it was finally working, it was kinda nifty to see that it did in fact work. You end up building these enormous function stacks of “meaning,” at the bottom of which is the print command, which translates to “print what that all meant.” At that moment, all the functions built up trigger in the right order, the AST order, and the result is delivered to your screen. It’s big and ugly and weird, but it works.
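A toy sketch of what those “function stacks of meaning” look like, transliterated into JavaScript. This is a drastically simplified sketch of the style, not the book’s actual denotational definitions:

```javascript
// Each "meaning" is a function awaiting an environment and a
// continuation; nothing runs until the final continuation arrives.
function meaningOfNumber(n) {
    return function(env, k) { return k(n); };
}
function meaningOfVariable(name) {
    return function(env, k) { return k(env[name]); };
}
function meaningOfAddition(m1, m2) {
    return function(env, k) {
        return m1(env, function(v1) {
            return m2(env, function(v2) { return k(v1 + v2); });
        });
    };
}

// The meaning of (+ x 3), evaluated under {x: 4}; console.log is
// the bottom of the stack, "print what that all meant."
var meaning = meaningOfAddition(meaningOfVariable("x"), meaningOfNumber(3));
meaning({x: 4}, function(v) { console.log(v); });  // 7
```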

Chapter six is about making that all go faster. But I needed a chapter 5 interpreter before I could begin.

It’s been two weeks since I finished chapter 5. The start of the school year for my daughter, and other life matters, have intervened. I’ve also been head deep in learning something esoteric for work, so I’ll have to go back and review that.


Using SplunkJS with SimpleXML Panels

Posted by Elf Sternberg as Uncategorized


Splunk’s SimpleXML is an XML file format for describing a custom dashboard with searches, inputs, and panels. There are a number of fantastic resources for building them, but I recently encountered an interesting problem. Those resources also discuss SplunkJS, a Javascript library that allows users to customize searches and visualizations far beyond what SimpleXML allows.

SplunkJS is usually used with raw HTML and CSS, but can be pulled into a SimpleXML file by using the script attribute in the SimpleXML opening <dashboard> or <form> tag. It’s easy to make a SplunkJS search and attach it to a SimpleXML visualization; it’s not so easy to make a SimpleXML search and attach it to a SplunkJS visualization. This document shows you how, and shows you how to fix a peculiarity that arises from creating a well-organized ecosystem of panels and dashboards.

In later versions of Splunk, SimpleXML has a new attribute for <panel>, ref, which allows you to define a panel in a single file and drop it into a number of different dashboards without having to cut-and-paste the panel code. In the process, SimpleXML mangles the names of searches and visualizations, and so finding and manipulating those searches has become difficult.

This example uses the Splunk Linux TA (Technology Add-on), so you should download and install that. What data you use isn’t really important. For our example, though, what we’re going to do is create a single dashboard with a single independent panel that shows the list of processes running on a host, find that panel, find its search, find its title, and modify the title with the name of the longest-running process.

After installing Splunk (here’s the free version of the Enterprise Edition, limited to a half-GB of data per day) and getting it up and running, click on the App icon (the gear symbol) on the left sidebar. On the Applications list, click on “Create a New App”, and provide it with a name, a directory slug, and a version number.

Now it’s time to fire up your editor. We need to create three things. A dashboard, a panel, and a javascript file to perform the magic.

Literate Program

A note: this article was written with the Literate Programming toolkit Noweb. Where you see something that looks like <this>, it’s a placeholder for code described elsewhere in the document. Placeholders with an equal sign at the end of them indicate the place where that code is defined. The link (U->) indicates that the code you’re seeing is used later in the document, and (<-U) indicates it was used earlier but is being defined here.

The Dashboard Files

The Dashboard file is simple. We just want to pull in a panel. This goes into APP_HOME/default/data/ui/views/index.xml. Here, APP_HOME is the path to the directory slug where your app is stored. I install Splunk in /opt and I named my example “searchhandle,” thus the path is /opt/splunk/etc/apps/searchhandle/default/data/ui/views/.


<dashboard script="title.js">
  <label>A Dashboard with a portable Panel and a Managed Title</label>
  <description>A simple demonstration integrating SimpleXML and SplunkJS</description>
  <row>
    <panel ref="cputime" />
  </row>
</dashboard>

The panel file is also simple. It’s going to define a search and a table. It goes in APP_HOME/default/data/ui/panels/cputime.xml. Note that the filename must match the ref attribute. I’ve limited the search to the last hour, just to keep from beating my poor little laptop to death.

The indenting of the query is a little odd; this is the best compromise I have found between making the search readable in the XML, and making it readable if you examine it with Splunk’s search tool.


<panel>
  <title>Long-running processes</title>
  <table>
    <search id="cputimesearch">
      <query>index=os source=ps | stats latest(cpu_time) by process 
| sort -latest(cpu_time)
| rename process as "Process", 
  latest(cpu_time) as "CPU Time"</query>
    </search>
  </table>
</panel>

The <dashboard> tag in our dashboard file has a script attribute. This is where we’ll put our logic for manipulating the title of our panel. It’s annoying that we have to put our script reference in the dashboard and not the panel. It’s possible to have a file named “dashboard.js” which will be loaded for every XML file in your app, and then have it selectively act on panels when they appear, but that seems like a half-hearted solution to the problem.

The Javascript

Javascript files go in the APP_HOME/appserver/static/ directory. I’ve named ours title.js.

Splunk uses the require facility to import files. In the prelude to any SplunkJS interface, you must start with the ready! import, which doesn’t allow the contents of this file to run until the Splunk MVC (Model View Controller) base library is loaded. We’re also loading the searchmanager and two utility libraries: underscore and jquery, both of which come with the SplunkJS UI.

The one thing we’re most concerned with is the registry, which is a central repository where all components of the current Splunk job’s client-side operations are indexed and managed.

The file’s outline looks like the below. Understanding is best served by reading the references from bottom to top: wait for the search, find the search, listen to the search, do something when the search triggers.


require([
    "splunkjs/mvc",
    "splunkjs/mvc/searchmanager",
    "underscore",
    "jquery",
    "splunkjs/mvc/simplexml/ready!"
], function(mvc, searchManager, _, $) {

    var registry = mvc.Components;

    <update the title with the data>

    <listen to the search for the data>

    <find the search>

    <wait for the search to be available>

});

In the outline, we took one of the items passed in, mvc.Components, and gave it a name, the registry. Waiting for the search to be available is as simple as listening to the registry:

<wait for the search to be available>= (<-U)

var handle = registry.on('change', findPanel);

Finding the search and attaching a listener to it is actually one of the two hardest parts of this code: first, because the new panels layout makes the search difficult to find; and second, because the change event mentioned above can happen multiple times, but we want to set up our listener only once.

Below, the function findPanel iterates through all the Splunk-managed objects on the page and finds our search. It does this by looking for a registry name that matches the ID of our search. The panel layout mangles the name, attaching the prefix “panelXX_”, where XX is some arbitrary index number. (In practice, the index number is probably deterministic, but that’s not useful or important if you’re going to be using this panel on multiple dashboards.) Underscore’s filter is perfect for finding out if our search is available. If it is, we disable the registry listener and proceed to the next step, sending it the search name.

<find the search>= (<-U)

var findPanel = function() {
    var panel = _.filter(registry.getInstanceNames(),
                         function(name) { return name.match(/panel\d+_cputimesearch/); });
    if (panel.length === 1) {'change', findPanel);
        setUpSearchListener(panel[0]);
    }
};

This is the most straightforward part of the code. Having found the search name, we then get the search manager, get its results manager, and then set up a listener to it that will update the title with the data.

Splunk searches manage the task of searching, but not the actual data. That happens in a Result, which updates regularly with the growing cache of data from the server to the browser.

This code skips a ton of details, mostly about listening to the search for failure messages. That’s okay. This is just an example, and it works 99% of the time anyway. Since we’re going to change the title to include the longest-running process, and our search is pre-sorted, we just need a count of one. This Result uses the same dataset as the actual visualization and puts no additional strain on the Splunk server or bandwidth between the server and the browser.

<listen to the search for the data>= (<-U)

var setUpSearchListener = function(searchname) {
    var searchmanager = registry.getInstance(searchname);
    var resultmanager ="preview", {
        output_mode: "json",
        count: 1,
        offset: 0
    });
    resultmanager.on("data", updateTitle);
};

The last thing we do is update the title. (Remember, that’s our goal). The panel’s title is found in the .panel-head h3 child DOM object. Finding the panel is trickier, but Splunk gives us an attribute with the name of the panel’s filename, so jQuery can find it for us. There’s a guard condition to ensure that we actually have some data to work with.

The names of the fields correspond to the final names in the search. I’ve always found Splunk’s naming conventions to be a little fragile, but it works most of the time.

<update the title with the data>= (<-U)

var updateTitle = function(manager, data) {
    if (!data || !data.results || !data.results.length) {
        return;
    }

    var topprocess = data.results[0];
    $("[data-panel-ref=cputime] .panel-head h3")
        .text("Longest Running Process: " + topprocess["Process"] +
              " (" + topprocess["CPU Time"] + ")");
};


One last detail: You want to be able to get to this page.

To do that, open the file at: APP_HOME/default/data/ui/nav/default.xml and replace the line for “search” with this:

<update navigation>=

<view name="index" default='true' />
<view name="search" />

Now restart Splunk.

And that’s it. Put it all together, and you’ve got yourself a working application in which SplunkJS can tap into SimpleXML searches and exploit their data, even if that search is defined in an independent panel.

This code is available at my github at Splunk with SimpleXML and Javascript.


Dear Gods, I’m not even sure why I should even bother, but the C++ experiments I’ve conducted recently were so much fun I’ve decided to put TOXIC (Terabytes of XML, Indexed and Compressed) and Twilight (A basic GraphDB with local indexing and aggressive caching, using RDF triples as index keys) back into my projects list.  I’m not even sure why.  It doesn’t seem like a very smart thing to distract me with yet more shiny.  But these would be fun, fun shiny.

Recently, while I was at Beer && Coding, one of the others came in with a problem that they’d been given by a potential employer. They’d hoped that we’d be able to help finish it. Nobody did in the time allotted, but I got pretty far with my Scheme version. However, Scheme wasn’t in the list of legal target languages.

The problem stated was:

Given a number (assume base 10) less than 10,000, write a program in
C++ that will reverse the digits of that number, calculate the
original number to the power of the new number, and print it out.
You may not use Boost, GMP, or any library other than that provided
by the C++ Standard Library.

I don’t know C++. I haven’t ever written C++ professionally, and I haven’t actually looked at C++ since 1999 or so. As a professional, I’m aware of what’s going on in the zeitgeist, and at my job at Spiral Genetics I interacted with two very talented C++ developers a lot, so I was aware of things like the emerging C++ Standard Library and RAII and so forth. I didn’t know what they meant, but I had heard of them. I’ve also been aware of the emerging standards in C++11 and C++14, mostly thanks to Slashdot, Hacker News, and their ilk (don’t read the comments, don’t ever read the comments), so I’d heard about auto_ptr and C++11 lambdas and the like.

It took about an hour of googling to get up to speed on things like namespaces, containers, for_each, lambdas, and the like. I really like the new unique_ptr construction. That’s very nice.

My basic solution degrades to 4th-grade mathematics: Break the multiplicand up into a list of single digits, multiply each digit with the multiplier, then redistribute the values up the tens, hundreds, etc., etc. This solution is not particularly fast or space-efficient, but it has the virtue of being comprehensible by any ten-year-old.
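The same 4th-grade arithmetic can be sketched in JavaScript (this is a sketch of the idea, not the actual C++ in the repository): a big number is a little-endian array of digits, and power is repeated digit-by-digit multiplication with carry redistribution.

```javascript
// Multiply a little-endian digit array by a small integer,
// redistributing the carry up the tens, hundreds, etc.
function multiply(digits, n) {
    var out = [], carry = 0;
    for (var i = 0; i < digits.length; i++) {
        var v = digits[i] * n + carry;
        out.push(v % 10);
        carry = Math.floor(v / 10);
    }
    while (carry > 0) {
        out.push(carry % 10);
        carry = Math.floor(carry / 10);
    }
    return out;
}

// Exponentiation by repeated multiplication of the digit list.
function power(base, exp) {
    var result = [1];  // the multiplicative identity
    for (var i = 0; i < exp; i++) {
        result = multiply(result, base);
    }
    return result;
}

// Render the little-endian digit list as a decimal string.
function digitsToString(digits) {
    return digits.slice().reverse().join('');
}

// Per the problem statement: 12 reversed is 21, so compute 12^21.
console.log(digitsToString(power(12, 21)));  // 46005119909369701466112
```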

As usual, I’ve provided a test suite, as well as a pair of utility functions for converting the list to a string, or an unsigned long. The latter only works with very small results. The executable, “cheapgmp”, works as specified in the problem statement.

The source code is, of course, available on Github.

