Table of Contents


This is where I post when I feel the urge. You're welcome to read, or not, as best suits you.

If you feel like talking about something here, or about anything else, you can find me at camϴ (replace the "ϴ" with an "@").

2017-07-02 World's Most Useless (or maybe handiest) Webserver

I have a server that I use in a slightly eccentric way: it doesn't run a webserver at all. I do other things with it, mainly run sshd so that I always have relatively easy access to a unix machine that is connected to the internet (always handy) and, more importantly, which holds a screen session that, in turn, holds my emacs workspace. This allows me to sit down at any computer with an SSH client at any time and immediately have every open emacs buffer available to me from the last time I stood up, even if I did the standing up from my laptop at Denny's and the sitting down at a public computer at school or the library.

In any case, I realize that it's weird in 2017 to run a server without anything listening on port 80, and I want to rectify that situation, but I don't want a full-blown webserver like Apache, Nginx, or even lighttpd. I'd have to install those, and configure them, and maintain them, and all of that is a huge bummer. It also doesn't make for a very good blog post. Instead, I'd like to homebrew something half-assed that does most of what I want, but probably fails badly in some edge cases. Hopefully, I'll learn something in the process (and so will you, Dear Reader).


So what do I want this server to do? Obviously, it needs to listen for connections on port 80, parse HTTP requests there, and respond with valid HTTP responses. Anytime I have a list of more than 1 thing, I like to write it down, so here goes:

  1. listen on TCP 80
  2. parse HTTP requests
  3. respond with HTTP responses

If I were going for minimalism (and I guess I am), I can take care of 1) with netcat aka nc on most systems. Thinking about 2), I guess if I lower my expectations I can probably get by without parsing requests, as long as my responses don't depend in any way on what the request contains. 3) can be taken care of pretty easily too, since nc can be set up to automatically dump some data into a TCP connection once it's made. Since we don't care what the client is saying to us, and aren't even parsing their requests, it should be enough just to hard-code a valid HTTP response, let nc return that, and call it a day. Let's take another look at that requirement list then:

  1. listen on TCP 80: nc handles this for me
  2. parse HTTP requests: nope, don't care
  3. respond with HTTP responses: we'll craft a pre-written HTTP response and have nc reply with that

Great! I guess we've got a pretty good plan. What's the code look like?

#!/usr/bin/env bash
# TARGET must be set to the URL you want every request redirected to
RESPONSE="HTTP/1.1 302 Found\nLocation: $TARGET\nContent-Length: 0\n\n"

while true; do
    echo -en "$RESPONSE" | nc -l -p 80
done

Notice the Content-Length: 0 header. This is necessary since we're not actually closing the connection, so we need a way to let the browser know that the response is over. According to the HTTP/1.1 specification:

The Content-Length entity-header field indicates the size of the entity-body, in decimal number of OCTETs, sent to the recipient or, in the case of the HEAD method, the size of the entity-body that would have been sent had the request been a GET.

In a mortal's tongue, this means that the header needs to be the length of the body of the response, in bytes. Since we're not sending any body (just headers), this should just be zero.
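To see that rule in action, here's a quick sketch (the "hello, world" body is my own made-up example) showing that the header value is just the byte count of the body:

```shell
# Content-Length counts the bytes of the body only, not the headers.
BODY='hello, world'
printf '%s' "$BODY" | wc -c    # 12 bytes, so Content-Length: 12

# Assemble a complete response the same way our script does:
printf 'HTTP/1.1 200 OK\r\nContent-Length: %d\r\n\r\n%s' \
    "$(printf '%s' "$BODY" | wc -c)" "$BODY"
```

(Strictly speaking, HTTP wants CRLF line endings, which is why this printf uses \r\n; most browsers are forgiving and will also accept a bare \n.)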

Does it work? We'll have to start it as root since it needs to bind to port 80:

# ./

And now we'll test it from another term:

$ curl -IL http://localhost:80
HTTP/1.1 302 Found
Content-Length: 0

HTTP/1.1 200 OK
Date: Thu, 29 Jun 2017 22:01:47 GMT
Server: Apache/2.4.18 (Ubuntu)
Last-Modified: Sat, 04 Mar 2017 21:47:34 GMT
ETag: "168d-549ee9c2f350f"
Accept-Ranges: bytes
Content-Length: 5773
Vary: Accept-Encoding
Content-Type: text/html


Nice, looks like it works!

Fiddly Bits

So our little webserver works, but we still need a way to nicely start and stop it. In other words, we need a systemd service file. I think I remember how to write these:

[Unit]
Description=A Shitty Webserver

[Service]
ExecStart=/root/nc-webserver/


This is basically the simplest-possible systemd service file. Install it to /etc/systemd/system/ (preferably via symlink) and make sure that the shellscript is in /root/nc-webserver/ (you can put it somewhere else, but you'll need to update the service file). Then start the service and check on it:

● nc-webserver.service - A Shitty Webserver
   Loaded: loaded (/root/nc-webserver/nc-webserver.service; linked; vendor preset: enabled)
   Active: active (running) since Sun 2017-07-02 21:54:54 UTC; 1s ago
 Main PID: 8480 (bash)
    Tasks: 2
   Memory: 472.0K
      CPU: 5ms
   CGroup: /system.slice/nc-webserver.service
           ├─8480 bash /root/nc-webserver/
           └─8483 nc -l -p 80

Looks like it works!

Conclusion and Acknowledgments

Depending on your requirements, you now have a completely useless or completely optimal webserver with no extraneous code. I should note that the inspiration for this project came from the Shinatra repository on GitHub, though I have literally changed every line of code.

While this isn't the most practical webserver for most use cases, you hopefully at least learned something about TCP and HTTP.

2017-06-11 Presidential Actors


Sometimes I become unreasonably interested in the answers to trivia questions of my own devising. The most recent one of these was "Which actor has portrayed the most real presidents in the movies?" This was prompted by my wife watching The Butler and my noticing Robin Williams as Eisenhower over her shoulder. Remember that he also (arguably–I'll get to this later) played Teddy Roosevelt in the Night at the Museum movies.

This seems like exactly the sort of question somebody would have already worked out and stashed on a Wikipedia–or at least Wikia–page somewhere, but a few minutes of furious Googling provided no satisfying answers. It was time to take matters into my own hands.

Luckily, there was already a Wikipedia page with all the data I needed to answer this question, though it was organized by president rather than by actor. I would have to do some massaging to get what I needed.

Data Cleaning

Rather than spend a lot of time writing a script to scrape the page and massage this data into a usable format, I did a cut and paste job in LibreOffice Calc. Some manual massaging got it into a state where I have rows of data like this:

President              Actor             Movie                       Year
James K. Polk          Addison Richards  The Oregon Trail            1959
Ulysses S. Grant       Aidan Quinn       Jonah Hex                   2010
Franklin D. Roosevelt  Al Richardson     Cash and Carry              1937
George Washington      Alan Mowbray      Alexander Hamilton          1931
George Washington      Alan Mowbray      The Phantom President       1932
George Washington      Alan Mowbray      Where Do We Go from Here?   1945


It would probably be better and more comprehensive to come up with a query to run against the IMDB dataset, but I was impatient and wanted at least a preliminary answer ASAP. My wife had lost interest by this point, but I persevered.


Finally, with the data massaged just how I wanted it, I could issue this command and get my answer:

$ cut -d, -f 1,2 < presidents_by_actor.csv | sort | uniq  | cut -d, -f2 | sort | uniq -c | sort -rn | head
  2 Robin Williams
  2 Ian Wolfe
  2 Bob Gunton
  2 Anthony Hopkins
  1 Woody Harrelson
  1 William Phipps
  1 William Petersen
  1 William Davidson
  1 William Daniels
  1 Walter Massey
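That one-liner is doing a fair amount, so here's the same pipeline stage by stage, run over four rows pulled from the dataset (written to a scratch file, sample.csv, for demonstration):

```shell
# Four real rows from the dataset, saved to a scratch file
cat > sample.csv <<'EOF'
Theodore Roosevelt,Robin Williams,Night at the Museum,2006
Theodore Roosevelt,Robin Williams,Night at the Museum: Battle of the Smithsonian,2009
Dwight D. Eisenhower,Robin Williams,The Butler,2013
Richard Nixon,Anthony Hopkins,Nixon,1995
EOF

cut -d, -f1,2 < sample.csv |  # keep only the president,actor columns
  sort | uniq |               # collapse to one row per distinct president/actor pairing
  cut -d, -f2 |               # keep only the actor name
  sort | uniq -c |            # count distinct presidents per actor
  sort -rn                    # biggest counts first
```

On this sample, the three Robin Williams rows collapse to two distinct president/actor pairs, so he comes out on top with a count of 2.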

Aha! Vindication! My observation about Robin Williams' acting career having an unusually high number of presidents in it was a correct one. According to this dataset, only 4 film actors have portrayed more than one president. Hmm, but I've never heard of two of these guys, and I remember Anthony Hopkins as Nixon, but not as any other president. Let's see which presidents each of them played:

$ cut -d, -f 1,2 < presidents_by_actor.csv | sort | uniq  | cut -d, -f2 | sort | uniq -c | sort -rn | grep 2\  | sed 's/^[\ 0-9]*//' | while read actor; do echo "$actor:"; grep "$actor" presidents_by_actor.csv | cut -d, -f1,3,4; echo;  done
Robin Williams:
Theodore Roosevelt,Night at the Museum,2006
Theodore Roosevelt,Night at the Museum: Battle of the Smithsonian,2009
Theodore Roosevelt,Night at the Museum: Secret of the Tomb,2014
Dwight D. Eisenhower,The Butler,2013

Ian Wolfe:
James K. Polk,California,1947
Calvin Coolidge,The Court-Martial of Billy Mitchell,1955

Bob Gunton:
Woodrow Wilson,Iron Jawed Angels,2004
Richard Nixon,Elvis Meets Nixon,1997

Anthony Hopkins:
John Quincy Adams,Amistad,1997
Richard Nixon,Nixon,1995

Ah, I guess I've never seen Amistad (it's rated R and I would have been 9 at the time). This and the others seem like candidates for my to-watch list, especially Elvis Meets Nixon, whose title would seem very relevant to my interests and which gets a relatively fresh 74% on the Tomatometer.

In any case, we have an answer: There are four actors, based on this data, who hold this distinction. But wait, the data isn't the whole story.

The Real Answers

To move away from the world of shellscripts, and into the world of movie trivia pedantry, does Robin Williams' record even count? Recall that the "Teddy Roosevelt" in Night at the Museum is not in fact the man himself, but a wax mannequin. There's even a very touching line:

Actually, I never did any of those things. Teddy Roosevelt did. I was manufactured in a mannequin factory in Poughkeepsie. I never shot a wild beast. I'm not even brave enough to tell that beautiful woman [Sacagawea] I love her. But you… you gotta finish the job this time. You can't quit. I'm made of wax, Larry. What are you made of?

Presuming that the presidential portrayals in the other movies (the ones I haven't seen yet) are "real" portrayals of "real" presidents, they should count for more than Robin Williams' performance does, since his is second-order fake: a portrayal of something that is itself only a portrayal of a president, rather than a direct portrayal of one.

So, to really be accurate, we have to take away Williams' slice of this crown and redistribute it three ways, among Ian Wolfe, Anthony Hopkins, and Bob Gunton.

2017-04-14 MiLB Schedule in Org-Mode

I live in Austin, and like to go to baseball games. This means that, unless I want to drive to Dallas or Houston (and I very much don't), I have to make do with minor league baseball, specifically the Round Rock Express at Dell Diamond. In fact, this suits me just fine, since it's a beautiful, intimate little ballpark, tickets are relatively cheap, it's a short drive, and parking is easy.

It's close enough that I can decide after work on any given day whether or not I'd like to go to a game that night, so I thought it might be nice to have Express home games show up in my Emacs org-mode agenda. I started by finding the Express schedule in iCal format. MiLB uses a third-party site to handle their calendaring (the schedule is available in other formats as well), and it can be found here. Choosing either "Apple" or "Other" gives you an iCal file, since I guess iCal has become the de-facto calendar interchange format. Go figure.

Anyway, the reason I wanted an iCal is because somebody has helpfully already written an awk script that will take an iCal file and turn it into an org-mode one. It's called ical2org.awk and you can get it here.

Note that the default Ubuntu 16.04 awk is not gawk, as literally everyone would expect and prefer. It's some other one that nobody's ever heard of called mawk. Since the author of ical2org.awk is a practical-minded person, it relies on some gawk-isms, and you'll obviously want to uninstall mawk and install gawk instead. You could install them side by side, but honestly you probably want gawk anyway, so take this opportunity to uncripple your system. With that out of the way, you can go ahead and run the conversion:

~ $  awk -f ical2org.awk < milb-roundrockexpress.ics >
awk: ical2org.awk:272: (FILENAME=- FNR=43) warning: gensub: third argument `' treated as 1
awk: ical2org.awk:284: (FILENAME=- FNR=43) warning: gensub: third argument `' treated as 1
...snip 279 lines...
awk: ical2org.awk:284: (FILENAME=- FNR=2563) warning: gensub: third argument `' treated as 1

Well, that didn't go as well as planned. After some time spelunking in the awk man page, I figured out that this program relies on some behavior that works but generates a warning, which, because of my output redirect, results in warnings mixed into my output org file. I could just redirect stderr away from my output file, but it turns out to be just as easy to fix the two lines that are the problem:

~ $ diff ical2org.awk ical2org_fixed.awk 
<             print "* " gensub("^[ ]+", "", "", gensub("\\\\,", ",", "g", gensub("\\\\n", " ", "g", summary))) "\n<" date ">"
>             print "* " gensub("^[ ]+", "", "1", gensub("\\\\,", ",", "g", gensub("\\\\n", " ", "g", summary))) "\n<" date ">"
<             print gensub("^[ ]+", "", "", gensub("\\\\,", ",", "g", gensub("\\\\n", "\n", "g", entry)));
>             print gensub("^[ ]+", "", "1", gensub("\\\\,", ",", "g", gensub("\\\\n", "\n", "g", entry)));
~ $

(I also put up the fixed file here if you prefer)

With that, the script runs perfectly:

~ $ gawk -f ical2org_fixed.awk < milb-roundrockexpress.ics >
~ $

Turning It Up To 11

That's all well and good, but it's only good for Austinites like myself. Let's do the same for all MiLB teams. I dug into the page with Dev Tools, fully expecting to spend hours digging through minified javascript calls before I gave up, but a little fiddling revealed that the Express file was stored at a predictable URL. Could it be that simple? I grabbed a list of MiLB teams from here:

~/milb_schedules $ cat team_names_unclean.txt 
Team    Class   League  MLB Affiliation State   Tickets
Aberdeen IronBirds      Class A Short   New York-Penn   BAL     MD
...snip 150 lines...
Vermont Lake Monsters   Class A Short   New York-Penn   OAK     VT

…and cut it down like so:

~/milb_schedules $ awk -F"\t" '{print $1}' team_names_unclean.txt | tr '[:upper:]' '[:lower:]' | sed -e '1d' -e 's/[. \/]//g' > team_names_clean.txt
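To see what that cleanup is meant to do to a single name, here's one team pushed through the same lowercasing and character-stripping stages (with a global /g flag on the sed substitution so that every space and period is stripped, not just the first):

```shell
# One sample team name through the cleanup: lowercase, then strip '.', ' ', and '/'
echo "Aberdeen IronBirds" | tr '[:upper:]' '[:lower:]' | sed 's/[. \/]//g'
# aberdeenironbirds
```

The result matches the slug format in the iCal filenames, e.g. milb-roundrockexpress.ics for "Round Rock Express".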

Then, I tried gathering iCal files for all of them:

~/milb_schedules $ time for team in $( cat team_names_clean.txt ); do wget${team}/milb-${team}.ics; done

There are 152 teams in this list, so it took a few minutes, but I was never rate limited or anything:

real    5m39.017s
user    0m1.688s
sys     0m0.552s

Finally, I ran ical2org.awk on all of them:

~/milb_schedules $ for team in $( cat team_names_clean.txt ); do gawk -f ./ical2org_fixed.awk < milb-${team}.ics > milb-${team}.org; done

None of these are really of any use to me except the Express file, but hopefully they are to someone else. You can find all of the converted Org files in this zip.

2017-03-28 Software We Love/Software We Hate

More and more of our life – work life, social life, civic life, love life, family life – plays out on software platforms, and I'm not revealing myself to be some kind of far-seeing futurist when I say that there doesn't appear to be any real chance of the trend slowing. Yes, software is eating the world, but even if it isn't licking its fingers and burping just yet, it does have an outsized influence on the way we live our lives. As such, it seems like a good idea to think about why people like some software, and dislike other kinds.

As good a place to start as any (and I think better than most) is to look at what software people actually, publicly, profess their affection for. Maybe it's just the circles I run in, but if I were to write down the programs I hear people most enthusiastically hail, the list would look something like this:

  • Emacs, Vim, Sublimetext
  • programming languages and compilers of all sorts
  • Adobe Photoshop
  • Microsoft Excel

Coincidentally, I would say that the list of programs I hear the most vitriol and complaints about would look something like this:

  • Emacs, Vim, Sublimetext
  • programming languages and compilers of all sorts
  • Adobe Photoshop
  • Microsoft Excel

So whatever it is that makes people really passionate about a piece of software is the same thing that makes people really hate it. All of these applications require a pretty steep learning curve. Once you've done the work, though, they seem to really amplify the amount that you can get done, in a Jobsian "bicycle for your mind" sense.

These applications will never be mass-market consumer applications like Google Search or Facebook, but they're there in the background, quietly allowing the people who make all that world-changing software to keep making it faster. IT is just the industry I'm most familiar with, but I'm sure there are other tools (software and otherwise) that fit this same niche in other industries.

I wonder what those are.

2015-10-02 Yance Man

I was recently in need of a decent resume site. Ideally, it would be something that was easy to update, and which was able to automatically produce both an HTML/web version and a printable PDF without duplication of effort. I thought for sure that there would be something simple and bulletproof out there already for exactly this purpose. If there is, please let me know, but I wasn't able to find any purpose-built software that I was happy with.

Then I realized that this was a perfect job for Jekyll, the very nice Ruby-based static website generator that I use for this very blog. That would take care of one piece, the transformation of simple (YAML) data files into a manageable static website through templating. But how would I automatically generate the PDF resume? I didn't want to update things in two places every time I made a change, for example by maintaining a parallel LibreOffice document alongside the website.

That's when I reached for wkhtmltopdf, a handy little utility that uses the WebKit rendering engine (the heart of the Safari, Chrome, and Opera browsers) to produce a PDF document from an HTML page. Now I can create a special HTML/CSS template, have Jekyll turn the YAML data into both a website and a wkhtmltopdf-ready page, generate the PDF from that, and serve the whole thing statically.

+-----------+         +-----------+               +------------+
|           |         |           |               |            |
| YAML &    +-------->|   HTML    |-------------->|    PDF     |
| templates |  jekyll |  pages    |   wkhtmltopdf |            |
|           |         |           |               |            |
+-----------+         +-----------+               +------------+

Now a single addition to any of my credentials, jobs, or projects, followed by a simple rebuild, results in a new static site along with the corresponding PDF resume for emailing, uploading to HR systems, and so on.

The addition of a third-party tool somewhat complicated my usual Jekyll deploy methodology, since a jekyll build by itself wouldn't be enough to generate the whole site. There are ways to create plugins and incorporate third-party tools into Jekyll, but I opted for a much simpler approach and just used make. Here's the simple Makefile used to build the site and PDF, removing the intermediate HTML document as it is no longer necessary:

site: _site/resume.pdf

_site/resume.pdf: _site/resume.html
	xvfb-run -a --server-args="-screen 0, 1024x768x24" /usr/bin/wkhtmltopdf \
	-s letter  \
	-B 1.5in -L 0.5in -R 0.5in -T 0.5in \
	_site/resume.html _site/resume.pdf; \
	chmod 644 _site/resume.pdf; \
	rm _site/resume.html

_site/resume.html:
	jekyll build

Since this feels like something others might want to use, I removed all references to myself and hid them in the Jekyll ./_config.yml file. In theory, all you should need to do is clone the yance-man repo, change the values in ./_config.yml to your liking, add your own entries in ./_jobs, ./_credentials, and ./_projects, and then let things rip with make. The static HTML files will show up in ./_site. I hedge that with "in theory" only because I know you'll want to tweak things to your own liking by messing with the Jekyll templates.

If you find this helpful, please drop me a line and let me know.

2015-07-20 Introducing Rote

The blog has been a little quiet recently. Part of that is that summer is upon us in Austin: the temptations of Barton Springs, barbecue, and just cold beer on the patio are exerting a natural force on me in the opposite direction of my $EDITOR and $SHELL. The other factor, however, has been that what little free time I have after all that swimming, meat, and fermented beverages has been taken up thinking about a project called Rote.

The good news is that the service is nearly ready for launch. The even better news is that the Wordpress plugin (which is useful even without the service, and which you should look into /right now if you've ever had difficulty keeping your Wordpress passwords secure and under control/) is out and can be found at the Rote downloads page. It should be available soon via one-click install from the repository.

While I certainly don't anticipate that the launch of the hosted service will result in my having more free time, I am planning on making regular posts a higher priority over the coming months, and have a few posts already gestating for upcoming release.

2015-05-09 Dawn

Dawn is something that I've been working on as an answer to the question that I believe a lot of developers (amateur and professional) find themselves asking, even if they never quite formulate it this way: "Why is it so much easier to hack together a CLI script than a web application?"

I think that part of the answer is that our development tools were born in the *nix shell environment, and that they simply haven't evolved very much to accommodate web-style UI/UX and deployment practices. In order to begin writing and testing a shellscript, all I need to do is type vim and I'm off and coding. "Deploying" the script is usually as simple as chmod +x.

By contrast, consider the simplest possible scenario for deploying a web application, one that requires no back-end for storing state, managing users, and so on. Something braindead easy, like a tip calculator. Most experienced *nix hands wouldn't even write a script for this, but simply use bc, dc, or even python as suits their fancy. Let's imagine, though, that we want to write a standalone script and deploy it on a web-accessible page so that grandma can use it on her three-year-old Windows Phone at the Cracker Barrel. At the very simplest, we need to create an HTML document to serve to the phone, write the Javascript to implement the calculator, provision a server or otherwise find some web space for it, and upload the files to that server or service. There are tons of libraries, frameworks, and deployment tools that make all of this easier, but none of them remove the need to do these basic things. It's all a pain in the ass, and it's a bad model if we want to have the same fluency on the web that we do in our beloved *nix shell.

So /Dawn should live completely in the browser. The development environment should be the same as the deployment environment, just like a CLI app./

Worse, if we want someone other than ourselves or grandma to use the app, we'll need to spend some time making it look nice. This means using some kind of pre-written theme or CSS framework, lots of fiddly CSS changes, or both. Without this, our web application looks like a joke to a user-base that is used to their web applications getting the same attention to design detail as Soviet propaganda posters and glossy architecture magazines. People don't want their web experience to look like a university course website from 1998.

So /Dawn applications should look respectable by default, without needing additional frameworks./

The other problem with the arms race of design and faddish, complex layouts is that we're constantly reinventing the basic vocabulary users have to interact with web pages and applications. The worst example of this is the wave of monstrous sites that will actually scroll left for desktop users when they spin their scroll wheel down. Instead of constantly changing the semantics of our web applications based on design fads, or in a misguided attempt to make them more "intuitive", we should instead realize that the best UI is one that is eminently familiar to the user, and that the only way users can become familiar, and therefore adept, at using an interface, is if we leave it alone and give them a chance to learn it.

So /Dawn applications should have a consistent UI paradigm that is stable and cannot be changed by ordinary applications./

Dawn therefore will attempt to address all of these concerns, but it starts with a simple language. Why not just use Javascript? Because it's big. The ECMAscript specification is a 258-page PDF, and that's just the language itself, not including the DOM or HTML specs and the other documents they reference. Dawn is inspired by the Joy language, which is very small, homoiconic, and has some other interesting properties, like being concatenative (x foo bar baz in Joy/Dawn is the same as baz(bar(foo(x))) in traditional notation). I admit that this RPN-style notation may not be the most developer-friendly, but it does simplify the execution model (depending on the point of view, there is either no state, or the stack is the only state). There are also ways I can explore of providing a more traditional ALGOL-ish syntax while maintaining some of these benefits.

Dawn is very much a work in progress and not especially useful for any real-world purpose yet. In particular, there is no way to "save" programs besides copying and pasting them into the console command line, but it is already fun to play with, which you can do here. An overview of the extant operators is below.

'plus', 'times', 'minus', and 'divide'

Just what it says on the tin. All of the operators work on the top two elements on the stack.
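To make the concatenative style concrete, here's a tiny session of my own devising using only the arithmetic operators (a sketch assuming the usual convention that the top of the stack is the right-hand operand; Dawn's actual console output may differ):

```
2 3 plus        the stack is now [5]
4 times         the stack is now [20]
6 minus         the stack is now [14]
7 divide        the stack is now [2]
```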

'swap', 'dupe'

Also pretty self-explanatory: swap exchanges the top two elements of the stack, and dupe pushes a second copy of the top element.


Brings the third value from the top to the top, such that [a, b, c] becomes [b, c, a].

'kill', 'killall'

Removes the top value from the stack, and clears the stack completely, respectively.

'log', 'alert'

Write the top value on the stack to the browser's javascript console, or a javascript alert dialog, respectively.


This is the heart of Dawn's nascent UI model. It treats the top element on the stack as a string and creates a box with that name. The "name" of a box is both the identifier used to refer to it in later Dawn code and the box "marquee" displayed in the UI, meaning that you only need to be able to see a box in order to write code that manipulates it.

Dawn boxes are receptacles for output, currently just text, but eventually images, etc. They are automatically reflowed based on screen size, and behave intuitively and consistently. By making the output model easy to reason about, we reduce the mental overhead necessary to deal with it, letting the user deal with solving their actual problems.

'write', 'append'

These operators write (erase and replace) and append text to the specified boxes. The stack is assumed to have the name of the box on top, with the text to append directly underneath it.

'do'

Execute the list on the top of the stack as if it were a program. Without getting into a lot of computability theory that I honestly only barely understand, this is the operator that gives Joy most of its interesting properties, allowing recursion and the equivalent of first-class functions in other languages like Javascript or Lisp.

'ifdo', 'loopdo'

Simple control operators built on do. ifdo looks at the top of the stack and, if it is truthy, executes the list underneath it as if do had been called. loopdo simply executes a list the number of times specified by the number on the top of the stack.

2015-02-14 Let's Write a Parser

Parsers are super useful. Without them, we would all have to operate and program our computers by twiddling switches directly in machine language like Grace Hopper or Mel did in the Dark Ages.

If you're like me, though, you never really stopped to think much about parsing besides in some CS class in college. If you're even more like me, you never took a CS class in college, and so you'd never really thought about it at all until one day you decided to start thinking about it, and then reading about it, and then writing some code in Javascript to get it all straight in your head and see if it works.

Ultimately, I hope to use what I learn here to build a clean parser for another project I'm working on (which should be in a publicly-sharable state Real Soon Now), but as a learning exercise I decided to tackle parsing fully parenthesized expressions. From the number of course websites you find when doing a search for that phrase, this is a very popular homework assignment in compiler classes like the ones I never took.

Fully parenthesized expressions are essentially regular math notation but without operator precedence (usually called "order of operations" when high school math teachers talk about it) because there are parentheses everywhere to distinguish what should be computed first. This, for example, is not a fully parenthesized expression:

2 + 4 * 10 / 2

Because you probably went to high school, you know that you should do the multiplication, then the division, and then do the addition last. But in a fully parenthesized expression, you have to do all that grouping explicitly with parentheses:

(2 + ((4 * 10) / 2))

Actually, I decided to use square brackets in mine, and not just because I like to be contrary. It is a significant and tragic fact that parentheses and curly braces are the main pairs of grouping symbols used in programming notation while, at the same time, square brackets are the only grouping operators that are accessible without having to use the shift key on a standard (US and others) keyboard layout. There's also the fact that "brackets" is easier to type and spell than "parentheses." Therefore, in my "fully bracketed expression" syntax, the same expression would look like this:

[2 + [[4 * 10] / 2]]

Great, we've decided what the syntax is that we're going to parse! Now, we probably just need to write it up in code and we'll be all set. From here on out, it's just a small matter of programming, right? Actually, no, we still have to do basically all the actual work of defining the grammar since, while you and I could read my description above and completely understand how to interpret [2 + [2 + 2]] or [100 * [4 / 1.927]], it's not actually specific enough to satisfy a computer.

Instead, we'll need to come up with a formal grammar, which sounds intimidating, and kind of is, but is really not that hard to figure out. There's something called Backus-Naur Form which also sounds intimidating, and also kind of is, but similarly is not very hard to grok if you just focus for a second (i.e. put down your phone, close your mail client and crack a beer, pour a coffee, or brew some tea).

There's this funny operator ::= that you can think of as a symbol that is used to specify the rules of the grammar. It could just as well be an equals sign, but I guess Backus and Naur liked to be contrary, which is something I can respect. I like to think of ::= as just meaning "is made up of."

There are some other operators too, but if you're relatively programming- and computer-literate, the meanings of those shouldn't really be surprising. | means "or" for example, * means just what it usually does in grep or sed, and depending on the variation you're working with, parentheses can be used for grouping and brackets (these handsome fellas: []) used to indicate optional parts of an expression. Anyway, it doesn't really matter to get too bound up with syntax for the grammar "productions" (this is the fancy CS term that is used instead of "rules"), since nothing will be parsing this right now except your own brain, so we can get a little sloppy with syntax if it helps us.

Let's start with some simple statements like "Whitespace is made up of a run of one or more spaces, newlines, tabs, carriage returns, or form feeds," and "A digit is made up of a single 0, 1, 2, 3, 4, 5, 6, 7, 8, or 9 character." This is how Backus and Naur would write that:

DIGIT ::= "0" | "1" | "2" | "3" | "4" | "5" | "6" | "7" | "8" | "9"
WHITESPACE ::= (" " | "\n" | "\t" | "\r" | "\f")+

From there we build something called a DIGITSTRING as one or more digits, and finally define a number to be either one of those or two of them separated by a period. This allows for both integers and decimal numbers:

DIGITSTRING ::= DIGIT+
NUMBER ::= DIGITSTRING | DIGITSTRING "." DIGITSTRING

We also define an operator to be any of the 4 arithmetic operations:

OPERATOR ::= "+" | "-" | "*" | "/"

So far, there's nothing tricky, really. Each of the productions matches a specific kind of thing, and the more complex ones are made up of simpler ones. Now we get to the tricky, recursive bit. I should mention that the kind of parser we're building here is called a "recursive descent" parser. The "descent" part will make more sense later, but we need to talk about the "recursive" part now that we're looking at the last two productions in our grammar:

ATOM ::= EXPRESSION | NUMBER
EXPRESSION ::= "[" WHITESPACE? ATOM WHITESPACE? OPERATOR WHITESPACE? ATOM WHITESPACE? "]"

These are relatively straightforward too, not much different from the other rules we've seen so far, but with a little twist. We're saying an atom is an expression or a number, and an expression is made up of an opening bracket, an atom, an operator, another atom, and a closing bracket, with optional whitespace in-between all of them. Notice that EXPRESSION appears in the definition of ATOM and vice-versa, so this is a recursive definition, which is unavoidable since the expressions we're trying to parse are recursive. Note that this doesn't create an infinite loop because we can eventually resolve each ATOM to be a number.

Now we have basically finished all of the necessary intellectual work. Creating a functional parser from here on out really is just a small matter of programming. In fact, there is a class of software called "parser generators" or "compiler compilers" that just takes in BNF statements like these and spits out the code for an actual parser. YACC is the historical Unix-y one that outputs C code, but there are similar packages for most languages and environments.

That's cheating, though, so let's actually do the leg-work of writing it for ourselves. A "recursive descent" structure makes this pretty easy, since all we have to do is write one function for each of the rules we already have.

For example, the DIGIT rule becomes this relatively simple little function:

function parseDigit(input) {
    if( input[0] === "0" || 
        input[0] === "1" || 
        input[0] === "2" || 
        input[0] === "3" || 
        input[0] === "4" || 
        input[0] === "5" || 
        input[0] === "6" || 
        input[0] === "7" || 
        input[0] === "8" || 
        input[0] === "9"   ) {
        return input.slice(0, 1);
    } else {
        return false;
    }
}

You can see that this just returns the character it recognizes as a digit, and returns false otherwise. This isn't especially useful, however, since the rest of the input string kind of just disappears. In order to give ourselves some more useful semantics for dealing with the input string, let's define a kind of utility object we'll call a charStream:

function charStream(str) {
    this.chars = str.split("");
}

charStream.prototype.peek = function () {
    if(this.chars.length > 0) {
        return this.chars[0];
    } else {
        return null;
    }
};

charStream.prototype.eat = function() {
     return this.chars.shift();
};
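As a quick sanity check of those semantics (the definition is repeated here so the snippet runs standalone):

```javascript
// charStream as above, repeated so this snippet is self-contained
function charStream(str) {
    this.chars = str.split("");
}

charStream.prototype.peek = function () {
    return this.chars.length > 0 ? this.chars[0] : null;
};

charStream.prototype.eat = function () {
    return this.chars.shift();
};

var cs = new charStream("42");
console.log(cs.peek()); // "4" -- peeking doesn't consume
console.log(cs.eat());  // "4" -- eating does
console.log(cs.peek()); // "2"
```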
Now, instead of dealing with strings directly, we have an object to keep some state for us and provide the nice methods "peek" and "eat", which allow us to look at the next character in the input, or consume the next character in the input, respectively. Now parseDigit() looks like this:

function parseDigit(inpt) {
    if(["0", "1", "2", "3", "4", "5", "6", "7", "8", "9"].indexOf(inpt.peek()) != -1) {
        return inpt.eat();
    } else {
        return false;
    }
}

There, that's a lot better. Now that we have parseDigit(), let's do parseDigitString():

function parseDigitString(inpt) {
    var out = [];
    var next;

    while (next = parseDigit(inpt)) {
        out.push(next);
    }

    if (out.length > 0) {
        return out.join("");
    } else {
        return false;
    }
}
Here we just collect digits from the input as long as there are digits to be had. After that, we check to see if we actually caught any, and if so return that. If not, we return false, indicating that there's no DIGITSTRING here.

Looking back at the grammar, I see now that we have everything we need to write parseNumber():

function parseNumber(inpt) {
    var whole_part, fractional_part;

    if (!(whole_part = parseDigitString(inpt))) {
        return false;
    }

    if(inpt.peek() !== ".") {
        return Number(whole_part);
    }

    inpt.eat(); //eat the decimal place so we can parse out the rest of the digits

    if(!(fractional_part = parseDigitString(inpt))) {
        return false;
    }

    return Number(whole_part + "." + fractional_part);
}

Now, instead of just looking for single characters or strings of like characters, we look for a specific format of input. Namely, we need either a run of digits or two runs of digits with a period character in-between. We do this by trying to parse out each part individually and failing if any necessary pieces are missing. Finally, we use the Javascript Number() function to turn the strings into an actual Javascript number.

So now we should be able to parse numbers! Let's do some quick checks to make sure things are working like we expect them to:

> console.log(parseNumber(new charStream("100")));
100
> console.log(parseNumber(new charStream("3.4")));
3.4
> console.log(parseNumber(new charStream("2..2")));
false
> console.log(parseNumber(new charStream(".1")));
false
> console.log(parseNumber(new charStream("this is not a number")));
false

Everything looks good. We get numbers out when we put numbers in, and false out when we put in nonsense. The next logical step would be to write parseAtom():

function parseAtom(inpt) {
    var out;

    if (out = parseExpression(inpt)) {
        return out;
    } else if(out = parseNumber(inpt)) {
        return out;
    }

    return false;
}

We can't actually test this yet, since it depends on parseExpression() and we haven't written that yet (notice that we'd be in the same situation even if we had written parseExpression() first, since they both depend on one another). So let's write it!

You'll notice from the grammar that we also need to parse operators and whitespace in order to write parseExpression(). parseOperator() is easy enough:

function parseOperator(inpt) {
    if(inpt.peek() === "+" || inpt.peek() === "-" || inpt.peek() === "*" || inpt.peek() === "/") {
        return inpt.eat();
    } else {
        return false;
    }
}

As for whitespace, a close look at the grammar reveals that it only contains optional whitespace; it's never required. I'm therefore going to preemptively make parseExpression() a lot less cluttered by extending charStream with this helpful method:

charStream.prototype.eatWhitespace = function() {
    while(this.peek() === " " || this.peek() === "\n" || this.peek() === "\t" || this.peek() === "\r" || this.peek() === "\f") {
        this.eat();
    }
};

As you can see, this just advances the input tape to the next non-whitespace character. Now we can write parseExpression():

function parseExpression(inpt) {
    var out = {};

    inpt.eatWhitespace();

    if(inpt.peek() !== "[") {
        return false;
    }
    inpt.eat(); //the brackets need to be there, but we don't actually need to keep them around for anything

    inpt.eatWhitespace();

    if(!(out.operand1 = parseAtom(inpt))) {
        return false;
    }

    inpt.eatWhitespace();

    if(!(out.operator = parseOperator(inpt))) {
        return false;
    }

    inpt.eatWhitespace();

    if(!(out.operand2 = parseAtom(inpt))) {
        return false;
    }

    inpt.eatWhitespace();

    if(inpt.peek() !== "]") {
        return false;
    }
    inpt.eat();

    return out;
}

You can see that it looks a lot like parseNumber but slightly more complex. Since I want to output JSON objects at the end of all of this parsing, we initialize an empty object, and then check to see if we can parse each part of the expression in turn, eating whitespace in-between. We fail if we can't parse them. Otherwise, we stuff them into the appropriate object property. We return the out object at the end if everything's gone right. Let's try it out and see what we can make of the expression we so laboriously converted to fully-bracketed notation earlier:

> console.log(JSON.stringify(parseExpression(
            new charStream("[2 + [[4 * 10] / 2]]")), null, "    "));
{
    "operand1": 2,
    "operator": "+",
    "operand2": {
        "operand1": {
            "operand1": 4,
            "operator": "*",
            "operand2": 10
        },
        "operator": "/",
        "operand2": 2
    }
}
It works!

Now, while this helps to show the basic shape of a recursive descent parser, it isn't really a very practical thing to use in production since it doesn't provide very good error messages or really any kind of exception handling at all. In a real-world parser, you would probably want the operand and operator properties of these objects to be objects themselves instead of strings and numbers, and of course there's the small matter of actually operating on the resulting parse tree to do something useful with its output.

All of these are left as an exercise for the reader, or maybe me at some point, but not today.
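For the impatient, though, the evaluation step is small enough to sketch, assuming the tree is well-formed. evalTree is just a name I'm inventing for illustration; a real version would want actual error handling:

```javascript
// Walk the parse objects produced above: a node is either a plain
// number or an object with operand1/operator/operand2 properties.
function evalTree(node) {
    if (typeof node === "number") {
        return node;
    }
    var left = evalTree(node.operand1);
    var right = evalTree(node.operand2);
    switch (node.operator) {
        case "+": return left + right;
        case "-": return left - right;
        case "*": return left * right;
        case "/": return left / right;
    }
}

// The tree parseExpression() produced for [2 + [[4 * 10] / 2]]:
var tree = {
    operand1: 2,
    operator: "+",
    operand2: {
        operand1: { operand1: 4, operator: "*", operand2: 10 },
        operator: "/",
        operand2: 2
    }
};

console.log(evalTree(tree)); // 22
```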

2015-01-17 Trees are Just Lists

Trees are just lists of lists. That may sound obvious to you if you hang out on comp.lang.scheme, know what a closure is, or really if you've ever used the word "tree" to describe something that isn't made of wood. Still, it's a significant fact, and it would be nice if the developers or specifiers of standard libraries in dynamic, scripty languages would remember it. Specifically, I'm kvetching about the fact that Javascript doesn't have, so far as I can tell, a built-in array "map" function that can recursively walk sub-arrays. I can see that there might be situations where one might want to map only the top level of an Array of Arrays, but can't we just use forEach or even just plain old for?

In any case, in the hopes that others on the internet may need a recursive Javascript map in a pinch and don't want to spend 5 minutes writing one themselves, I give you this:

function mapTree(tree, func) {
    function mapTreeHelper(x) {
        if (Array.isArray(x)) {
            return x.map(mapTreeHelper);
        } else {
            return func(x);
        }
    }
    return mapTreeHelper(tree);
}

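A quick check that it does what it says (with the definition repeated so the snippet runs standalone):

```javascript
// Recursive map over arbitrarily nested arrays, as above
function mapTree(tree, func) {
    function mapTreeHelper(x) {
        if (Array.isArray(x)) {
            return x.map(mapTreeHelper);
        } else {
            return func(x);
        }
    }
    return mapTreeHelper(tree);
}

var doubled = mapTree([1, [2, [3, 4]], 5], function (x) { return x * 2; });
console.log(JSON.stringify(doubled)); // [2,[4,[6,8]],10]
```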
Now you can get back to whatever it was that you were actually coding up. You're welcome.

2014-12-26 Distributed Autonomous Art

The blockchain universe, still mainly a place for blind speculation on the relative values of various digital goods of dubious value (usually by exposing oneself to quite a lot of risk by trusting fly-by-night exchanges held together with chewing gum, PHP, and a prayer), is slowly growing an ecosystem of actually useful infrastructure for the creation of distributed applications other than simply payments. The most general of these are Turing-complete Ethereum-style smart contracts, which are touted by the project's boosters (and myself) as an epoch-making technology that can help solve lots of important problems in business, government, and society. There are tons of more erudite thinkers who could expand on that and tell you exactly why or why not that might be true, but to be honest that game is a little bit tedious for me. Instead, I thought it would be fun to look at this new medium and do something purposeless with it. In other words, I decided to make some distributed, trustless, autonomous art.

The following contracts are all written in the variant of Serpent accepted by the latest (as of writing) version of PyEthereum (0.7.49).

Here's my first piece. Since blockchain-based technology mainly concerns itself with ownership and commerce, I thought my first Ethereum art piece should be a meditation on the meaning of ownership. What does it mean to own something? More specifically, what does it mean to own something as intangible as a blockchain-based smart contract? To strip it down, I decided that, at minimum, it needs to be possible to 1) find out who the owner of something is, and 2) sell it to someone else, who then becomes the owner. Once I'd decided that, I realized that, since my work is software, these requirements could be written up as a suite of tests, and so I wrote a short assert-based Python script to run my tests:

import pyethereum
t = pyethereum.tester
u = pyethereum.utils
s = t.state()
c = s.contract('')

seller = s.send(t.k0, c, 0, funid=0, abi=[])
seller_balance = s.block.get_balance(t.a0)
buyer_balance = s.block.get_balance(t.a1)

#funid 0 => who_owns_me, 1 => current_price, 2 => reprice, 3 => buy
#initial price should be 2^254
assert s.send(t.k0, c, 0, funid=1, abi=[]) == [28948022309329048855892746252171976963317496166410141009864396001978282409984L]
#set a new price
assert s.send(t.k0, c, 0, funid=2, abi=[50000000]) == [0]
#check to make sure it happened
assert s.send(t.k0, c, 0, funid=1, abi=[]) == [50000000]

#try to set a new price as a different user
assert s.send(t.k2, c, 0, funid=2, abi=[5]) == [1]
#make sure it failed
assert s.send(t.k2, c, 0, funid=1, abi=[]) != [5]

#now the buyer will send the money and hopefully gain ownership:
assert s.send(t.k1, c, 50000000, funid=3, abi=[]) == [0]
#let's make sure they did:
assert s.send(t.k1, c, 0, funid=0, abi=[]) != seller

#seller should be at least 50000000 richer:
assert s.block.get_balance(t.a0) >= seller_balance + 50000000
#...and buyer should be at least 50000000 poorer:
assert s.block.get_balance(t.a1) <= buyer_balance - 50000000

#price should be at max again:
assert s.send(t.k0, c, 0, funid=1, abi=[]) == [28948022309329048855892746252171976963317496166410141009864396001978282409984L]

print "all good"

This may be the first instance of TDD as applied to art. Here is the contract that meets those requirements (and passes the tests):

def init():
        self.storage[0] = tx.origin #owner of the contract
        self.storage[1] = 2^254 #price he'll sell it at
def who_owns_me():
        return(self.storage[0])
def how_much_i_cost():
        return(self.storage[1])
def change_price(x):
        if msg.sender != self.storage[0]:
                return 1 #message sender isn't owner, so don't change anything
        #they are, so change the price of the contract to what is specified
        self.storage[1] = x
        return 0
def buy_me():
        if msg.value >= self.storage[1]:
                send(self.storage[0], self.balance)
                self.storage[0] = msg.sender
                self.storage[1] = 2^254
                return 0
        return 1

Ethereum-style contracts effectively sit on the blockchain and do whatever it is you've programmed them to do. Once you create one, it sits happily forever (unless it decides to "suicide") and runs its code whenever anybody (or any contract) sends it a message and enough "gas" to run itself for that execution. It can then check out its environment: it can see how much money it has, what you said in your message, who sent the message, what time it is (according to blockchain consensus), the block number, and so on. It can also send messages to other contracts and even send money around to other Ethereum addresses (which can be people or other contracts).

Here, the creator of the contract is also the initial owner, and the first function who_owns_me() does just what it says on the box, letting anyone know who owns it. The first requirement is therefore satisfied, since anyone (or any contract) can find out who owns the work. Anybody is free to buy it from the owner, however, provided they pay the current price. This defaults to 2^254 Wei, effectively the same as putting something on Ebay with a reserve price of a googol dollars. In other words, it's not really for sale, but the owner can set a lower price when they want to put it up for sale. Once they do, anyone can send another kind of message, and if it contains enough money, they are registered as the new owner and the price is set back to the maximum.

After that, I started thinking about how I could improve the contract. What are the biggest downsides of buying things? Probably the most onerous is that they don't always increase in value. From art, to houses, to junk bonds, to cars, the biggest drag with buying almost anything is that it could be worth less when you go to sell it again. This is an unassailable problem in the old-and-busted meatspace economy, but with Ethereum we'll finally have the tools to address it. Here's a small variation on the contract that guarantees you can never lose money by selling it for less than what you paid:

First we have to make sure to save the last sale price when a sale occurs, so we'll modify the buy_me() function slightly:

def buy_me():
        if msg.value >= self.storage[1]:
                send(self.storage[0], self.balance)
                self.storage[0] = msg.sender
                self.storage[2] = self.storage[1] #save the last sale price
                self.storage[1] = 2^254 #take it off the market for now
                return 0
        return 1

Next, we simply change the change_price() function:

def change_price(x):
        if msg.sender != self.storage[0]:
                return 1 #message sender isn't owner, so don't change anything
        if x < self.storage[2]:
                return 1 #this is a non-depreciating asset, so we can't sell it for less than we bought it for
        #everything checks out, so change the price of the contract to what is specified
        self.storage[1] = x
        return 0

Notice that we don't have to initialize the self.storage[2] slot to zero, since zero-initialization is explicitly guaranteed as part of the EVM spec. This may seem slimy to C programmers, but it's perfectly valid, and if you do initialize explicitly, the serpent compiler won't optimize the redundant store out, at least not currently. I did actually check:

$ cat 
def init:
        self.storage[0] = 0
        return 0
def dummy:
        return 0
$ cat 
def init:
        return 0
def dummy:
        return 0
$ serpent pretty_compile | fold
[PUSH1, 0, PUSH1, 0, SSTORE, PUSH1, 0, PUSH1, 32, MSTORE, PUSH1, 32, PUSH1, 32, 
$ serpent pretty_compile | fold
[PUSH1, 0, PUSH1, 32, MSTORE, PUSH1, 32, PUSH1, 32, RETURN, PUSH1, 26, DUP1, PUS

You don't actually need to follow exactly what the bytecode is doing, though it's interesting to see all the gymnastics needed just to create a contract that doesn't do anything. Notice that the version that doesn't initialize to zero is a few instructions shorter.

I have some more contracts either written or in the brain hopper waiting to be realized, but that's all for now. Note that none of these are actually published on any blockchain yet, so these are more blueprints or first drafts than actual pieces. Still, I think the point I'm trying to make, if I'm making one at all, is that if Ethereum is a rich enough medium to allow for subversive (I flatter myself, but I'm so humble about it) art projects, then it is likely expressive enough for a whole universe of things that haven't even been thought of yet.

There are several smart contract platforms emerging right now, but the most exciting of these is Ethereum, which has grown in about a year from an informal whitepaper about some novel ideas in blockchain technology to a worldwide organization regularly shipping working examples of an increasingly sophisticated ecosystem for full GUI-driven distributed "Dapps" (I was unfortunately not consulted on the coining of that word). This commitment to making the whole process of using distributed trustless applications attractive for non-technical citizens is what will lead to its success over other groups working on similar smart contract schemes, or, if not, to the success of something very much like it in the future. In particular, this is an advantage over projects that are focusing on using or creating blockchains for one specific application (Namecoin attempting to replace DNS, Ripple setting up an alternate payment clearing system), on replicating Bitcoin with relatively minor changes to address specific gripes about it (Litecoin) or simply as a vehicle for speculation (Dogecoin), or which are building the hard crypto blockchain-side stuff while mainly punting on UI/UX by centralizing it in a web application (Counterparty and their Counterwallet).

Whatever specific project we settle on, I do firmly believe that we are entering the Blockchain Age, built on the Information Age much as that in turn was built on the Industrial Age. My silly sarcastic programming ditties are among the first drops in a deluge of innovation.

2014-07-12 Manchester Baby

I got it into my head recently that I could have a good time trying to build my own computer out of SSI and MSI-scale logic chips, eschewing the last 40 years of progress in computer technology and returning to the pre-microprocessor era in order to learn a few things, just like learning to build a campfire in the Boy Scouts taught me (something, I'm told) about myself and the human condition.

As part of my research for this, I've been having fun looking at the architectures of several different generations of computers, all the way from the early post-war machines to some of the early microprocessors, for architectural inspiration. There is an amazing amount of variation in the shapes that even a conventional stored-program computer can take, and I'm going to take a look at a few of the most interesting.

The first I'll look at is the common ancestor of basically every computer I've ever used, since it was the first functioning stored program Von Neumann architecture computer. Being a British machine, however, its name is a product of that nation's penchant for ironic understatement: The Small Scale Experimental Machine, or more informally, "Baby."

It was the first stored program computer, meaning that the program is stored in the same memory as everything else, and you can change any of it quite easily, rather than having to move a lot of cables.

This machine was intended only to be a way to test the use of a new kind of computer memory before going ahead with building a full, "real" computer. That memory was the Williams tube, one of the Rube Goldberg lashup types of computer memory that were used in computers before core memory, and later the more familiar semiconductor chip memory, were developed. CRT tubes left over from wartime radar sets were turned into a way to store 32 bit numbers. Perversely, this was a pretty big improvement over the mercury delay lines used previously (and for some time afterward) because it was a random access system, meaning that a machine could basically treat it like we would use RAM today, whereas with mercury lines the machine would basically have to wait around until the data was available. There was also the added advantage of not having to have columns of mercury sitting around your computer room.

As an amateur student of computer architecture, the Manchester Baby is ideal to look at because its pure vanilla nature makes it basically the platonic ideal of a Von Neumann architecture computer.

  • Registers

    The only general purpose register was the single accumulator. To load the accumulator from memory, your only option was a negated version of the value at the given address, though you could store any value into memory without having it negated. This meant that if you had an address and wanted its contents in the accumulator un-negated, you needed to load the negated version, store it back into memory, and then load it again, negating it twice.

  • Branching and Program Control

    There were two jump instructions, both unconditional (absolute and relative, in modern terms). Branching could be accomplished by an instruction that incremented the program counter twice during instruction execution instead of once (skipping the next instruction) if the value in the accumulator was negative. This is certainly an interesting approach, since it effectively allows the unconditional jumps to be turned into branching instructions, which presumably is why they felt so generous with the two different addressing modes for the jump instructions, knowing that they would also be reused as branching instructions.

  • Other Operations

    If you wanted to actually perform any operations on what was in the accumulator, you were limited to subtracting things from it. Addition could be performed by negating a number and using the subtractor to add by subtracting the negative. This instruction took one value from the accumulator and the other from the point in memory pointed to by the address part of the instruction word, storing the result back in the accumulator. This storing back to the accumulator was accomplished rather easily without any intermediate buffer, since the memory needed to be refreshed every cycle (they termed it a "beat") anyway, so changing the value there could be accomplished by simply substituting the new value for the old one in the refresh circuitry.

  • Memory

    The best part was that, even though you were limited to just the bare minimum of instructions to allow computation, you still only had 32 words (of 32 bits each) of memory to work with, meaning you really had to make each one count. That was alright in Baby's case, since the whole point of it was just to prove that the whole general architecture was possible with the technology they were using. That makes it just about equivalent to the machine I'm building, whose main purpose is for me to have fun building and using it. That said, I'll probably use something closer to 8-16 bits for my word size, 32 bit-wide memory chips being rather hard to come by, and I think I'll allow myself the luxury of more than 32 words as well.

    In terms of practical details that are of help to me in my quest, the use of a single-register architecture is interesting, but I don't believe I'll be using it, the reason being that I don't want to use a memory technology that needs to be constantly refreshed, as the Williams Tube did. Today, we'd call such memory DRAM, and it is significantly cheaper than the alternative, SRAM. The difference is significant in multigigabyte installations, but for just a few K, it's worth it not to have to deal with the refresh circuitry. As a result, there would be no easy way for me to have my accumulator be both the source of one of the operands and the destination for the result without an intermediary buffer, and if I'm adding another register anyway, I may as well put it under the control of the sequencer so that it can be used in other ways than simply buffering the result of an ALU operation.

    The skip instructions are of more interest. They would allow me, in the same way as in Baby, to expand the capabilities of the instruction set efficiently. With just a few kinds of jump (unconditional branch) instructions, conditional skipping allows one to create a whole vocabulary of different branching conditions. Since the skip instructions don't need to carry around an address in their instruction word, there can be effectively as many of them as I like while still only taking up one instruction "slot" (I'm currently planning on a 4-bit instruction format, meaning that all of the skip instructions can fit into one of the 16 available instructions). In that way, with just two different jump instructions in different addressing modes, I can effectively have N*2 conditional branch instructions, where N is the number of skip instructions that I have.

    While I appreciate simplicity, especially since I am a klutz with a soldering iron, the architecture of this machine is a little too hardcore even for me. The takeaway from the Baby design, however, is that you can do a lot with a little, and that I shouldn't overcomplicate my design if I want to simply prove its feasibility.
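To make a couple of those tricks concrete, here's a sketch using the conventional modern mnemonics for the Baby's instruction set (the machine itself was programmed in raw binary, so treat these listings as illustrative pseudo-assembly rather than something you could feed to a real assembler):

; Copy mem[X] into the accumulator un-negated, using a scratch word T
LDN X      ; ACC = -mem[X]  (loads always negate)
STO T      ; mem[T] = ACC
LDN T      ; ACC = -mem[T] = mem[X]

; Add mem[A] and mem[B] using only the subtractor
LDN A      ; ACC = -A
SUB B      ; ACC = -A - B
STO T      ; store the negated sum
LDN T      ; ACC = A + B

; A conditional branch built from a skip plus an unconditional jump
CMP        ; skip the next instruction if ACC is negative
JMP L      ; reached only when ACC >= 0, i.e. "branch to L if non-negative"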

2014-06-05 Bedrock

Computers are super neat. See all this stuff on the screen in front of you? See all those other tabs with recipes for apple turnovers and Wikipedia articles about The A-Team? That's computers. Sure, it was possible for people to share information about recipes and popular culture half a generation ago, before computers, but it was way harder. In the same way, it was possible to do a lot of fiddly math like cracking Nazi codes and computing gunnery firing tables before early computers got running. Computers made all of this way easier, and enabled a bunch of stuff that wasn't otherwise possible at all, like the movie Wargames starring Matthew Broderick and OpenSSL vulnerabilities. But let's focus on the part where computers take stuff that we would otherwise be doing anyway and make it way faster and easier.

It didn't start out that way.

Way back when, if you wanted to use a computer just to add two smallish numbers, you had to do all kinds of crazy stuff like learn binary, understand the difference between a half-adder and a full adder, the difference between one's complement and two's complement arithmetic, and then read a bunch of manuals written by some guys with skinny ties at IBM just so you could turn all of that into some inscrutable machine language. Then, you had to punch holes in these things that were like index cards the size of dollar bills from the 1890s. Finally, you would feed them into a machine that made a lot of noise and you would get your answer.

This is all way, way harder than just learning how to add and subtract, but computing had to start somewhere.

It is with this in mind that I have been trying to get a deeper understanding about low-level computer operations, and that's what prompted me to create Bedrock, which is a ruby script that simulates something like a simple computer. In a lot of ways it's much easier to program than learning to understand real low-level computer instructions, but basically it tries to get the idea across.

Bedrock simulates a computer whose memory is an array of 100 memory locations (0-99), each of which stores a 3 digit decimal number (000 - 999). Instructions are stored as 3 digit numbers just like the other data, so this is an example of a Von Neumann architecture, the kind that most computers use today. There are also two registers, Fred and Barney, and an index register whose purpose I will explain later.

When treated as instructions, the first digit of each "word" is taken to be the instruction opcode (though not in the case of "immediate instructions," as I'll explain later). The last two digits are taken to represent an address in memory (one of the 100 cells numbered from 0-99). Each instruction opcode does something different with what it finds at that address, and the thing that it does is what makes it unique and useful to the poor programmer trying to get something done. Here are the opcodes that work in this way, taking "AA" to stand in for the address portion of the instruction:

1AA - Load the contents of the memory address into Fred
2AA - The same, but with Barney instead of Fred
3AA - Store the contents of Fred into memory at the given location
4AA - The same, but with Barney instead
5AA - Move program execution to the given address on the next cycle
6AA - The same, but only if Fred is zero
7AA - Add the contents of the given address to Fred
8AA - The same, but for Barney
9AA - Don't do anything. Reserved for future additions

I mentioned an index register earlier, and the addresses used by the above instructions are actually formed not by the "AA" part alone, but by "AA" plus the contents of the index register (so, with 3 in the index register, the instruction 120 loads the contents of cell 23 into Fred). This is called a "direct indexed" addressing mode by skinny tie IBM types, and is the only addressing mode that Bedrock has.

The point of such a mode is to make it easier to increment through portions of memory, since incrementing and decrementing the index register is an operation with its own instruction. You don't see it above because it belongs to a special class of instructions. These all begin with "0" and don't concern themselves with memory addresses at all. They are as follows:

000 - Halt. Don't do anything else.
010 - Zero out the index register.
011 - Decrement (subtract one from) the index register
012 - Increment (add one to) the index register

You can see that there are only four of these types of instruction at the moment, but there is room for as many as 99, so there is considerable room for expansion if I feel it is needed.

Now, if this were a proper computer, somebody would have written something called an assembler (I still might, but I haven't yet). An assembler is a program that produces these strings of numbers after reading something that approximates source code, but is mostly cryptic three-letter codes. The advantage of the three-letter codes is that they're easier to remember than the even more cryptic numbers the computer understands. I don't have an assembler, but I do have the three-letter codes. Here are the ones that correspond to the instructions above:

LDF - 1AA - Load the contents of the memory address into Fred
LDB - 2AA - The same, but with Barney instead of Fred
STF - 3AA - Store the contents of Fred into memory at the given location
STB - 4AA - The same, but with Barney instead
JMP - 5AA - Move program execution to the given address on the next cycle
BRZ (branch if zero) - 6AA - The same, but only if Fred is zero
ADF - 7AA - Add the contents of the given address to Fred
ADB - 8AA - The same, but for Barney
??? - 9AA - Don't do anything. Reserved for future additions

Immediate instructions:

HLT - 000 - Halt. Don't do anything else.
ZEI - 010 - Zero out the index register.
DEI - 011 - Decrement (subtract one from) the index register
INI - 012 - Increment (add one to) the index register

Programming the thing is way more fun than actually doing arithmetic, which I suspect is the main reason that computers caught on with engineers in the first place, even if it wasn't strictly easier to get anything useful done per se.

2014-05-18 Son of Bashpodder Returns

My last post was about the tabcast utility that I had been working on. Well, it's now got a lot of the rough edges smoothed over and is available on RubyGems, so installing it is as easy as gem install tabcast for anyone with gem already installed on their system.

The biggest improvement to date, and the one that has finally allowed me to abandon my trusted, venerable, and heavily-modified Bashpodder script, is the addition of the --killfile option. This option leaves out of the output any entries whose {{enclosure_url}} is found in the URL-per-line file specified. There is also a --guidkillfile option if you prefer that, but many of the feeds I consume don't publish guids for every post.
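
The filtering that --killfile performs can be sketched in plain shell with grep. This is not tabcast itself, just an illustration, and the file name and URLs are made up:

```shell
# Drop any URL that already appears, one per line, in the killfile.
# -v inverts the match, -x matches whole lines, -F treats patterns
# as fixed strings, -f reads them from a file.
printf 'http://example.com/ep1.mp3\n' > killfile.txt
printf 'http://example.com/ep1.mp3\nhttp://example.com/ep2.mp3\n' |
    grep -vxF -f killfile.txt
# prints only http://example.com/ep2.mp3
```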

My podcatcher script is now basically a one-liner, even though there are line breaks in the file for readability:

cd $basedir/to_listen
cat $basedir/podcast.conf | while read u; do
    /usr/local/bin/tabcast -f "{{enclosure_url}}\n" --killfile $basedir/killfile $u
done | while read mp3_url; do
    wget "$mp3_url"
    echo "$mp3_url" >> $basedir/killfile
done
podcast.conf contains the list of feed URLs, by analogy to bp.conf, which is the equivalent file in the stock bashpodder script. Here, I read each of those feed URLs and call tabcast on each one with the "-f" flag, which lets me specify a format string using Liquid templates. All I'm actually interested in with this script is the URL to the audio file itself, so I use a format string that prints only that, followed by a newline. The "--killfile" option works as I mentioned and weeds out any URLs that I've already downloaded; this is the equivalent of bashpodder's podcast.log file. Then I read this list of audio file URLs, fetch them with wget, and add their URLs to the killfile. My ~/podcasts/to_listen directory is now brimming over with podcast episodes for me to listen to!

Some easy additions that I'll probably make would be to keep some other pertinent details alongside the mp3s, like the episode and channel titles, on the filesystem alongside the audio files themselves. I'm planning on doing this either by keeping $filename.json files next to them, or by simply writing that info into id3 tags on the mp3 files themselves. Either approach may in fact be a little too high-falutin' for my tastes, though, and a simple directory listing of mp3s suits me just fine at the moment.
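
The $filename.json approach could be sketched like this. It's hypothetical, and the printf on the first line just stands in for tabcast's real tab-delimited output (unix timestamp, title, enclosure URL):

```shell
# For each episode line, write a small JSON sidecar named after the
# file the enclosure URL points to (e.g. ep1.mp3 gets ep1.mp3.json).
printf '1400000000\tEpisode_One\thttp://example.com/ep1.mp3\n' |
while IFS="$(printf '\t')" read -r utime title url; do
    printf '{"utime": %s, "title": "%s", "url": "%s"}\n' \
        "$utime" "$title" "$url" > "$(basename "$url").json"
done
```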

2014-04-11 Son of Bashpodder

I like listening to podcasts, and as someone who likes using composable, pipe-able, Unix-style applications, the world of podcatching software is pretty frustrating. The best solution for Unix weenies that I've found to date is BashPodder, which is a small, hackable bash script that uses XSLT transformations to pull the enclosure URLs out of your podcast feeds and wget to download them. This is an improvement over web-based services and hulking iTunes-style GUI applications, but it still uses XSLT, which is difficult for most human beings, myself included, to understand and hack.

The problem isn't Bashpodder's fault, really, since XSLT probably is the easiest way to interact with XML in the way Bashpodder needs to. The bigger problem is that podcasts use XML at all. XML is not simple, and therefore should be eschewed by anything that bills itself as "Really Simple Syndication." More importantly for Unix weenies (who deal with complicated, over- or under-specified technologies all the time), RSS and Atom aren't well-suited to piping around a shell and slicing with sed, awk, and the other traditional Unix text processing tools.

Fortunately, other people with more patience have created libraries for the major scripting languages that abstract away most of these complexities and provide relatively simple interfaces for working with RSS and Atom feeds. The RSS module in Ruby's standard library is one of the best of these I've seen and really makes it easy to turn the M.C. Escher-esque XML soup of a podcast feed into something that is more palatable to a shell user. In fact, when combined with the Liquid templating gem, the whole project is pretty trivial.

Enter tabcast, a short Ruby program that lets you slurp down a podcast URL and turn it into a log-style, line-per-item format that is easy to use in shell scripts and interactively on the command line.

By default, tabcast will simply output a tab-delimited list of each item's Unix timestamp, title (with whitespace converted to underscores), and the URL of the item's enclosure.

This means that it's now easy to work with these on the command line. So, if we want to download the latest 5 episodes of the Kingdom of Loathing podcast, we could simply:

$ for url in $(tabcast <feed url> | sort -rn | head -n 5 | awk '{print $3}'); do wget $url; done

…or if we wanted to download all the episodes into folders named after the year an episode was released:

$ tabcast <feed url> | 
while read episode; 
do url="$(echo $episode | awk '{print $3}')";
year=$(date --date=@$(echo $episode | awk '{print $1}') +%Y); 
mkdir -p $year; cd $year; wget $url; cd ..; done

You can create custom formats as well, using Liquid templates as I mentioned earlier. For example, if you would prefer a pipe-separated list with urlencoded titles, you would simply use:

{% raw %}
    $ tabcast --format "{{utime}}|{{title | urlencode}}|{{enclosure_url}}\n" <feed url>
{% endraw %}

The idea is to leverage all the awesome work somebody's already done in exposing a sane RSS interface to Ruby and use it to expose a sane RSS interface to command-line users. I wish this wasn't necessary and there were a simpler and more friendly media syndication format in widespread use, but that's a rant for another day. In the meantime, I will just continue to use tabcast and muddle through.

tabcast hasn't, for me, replaced Bashpodder. Instead, it's taken the place of all the messy XSLT transformations in my already much-modified copy of the Bashpodder script.

2014-03-20 Reverse Polish Cowgirl

Let me describe Reverse Polish Cowgirl, the most poorly designed language I know about, and I know all about it because I'm the one who designed it.

I've been playing around lately with Scheme (mostly Chicken Scheme). One of the major features of Lisps and Schemes is that they seem to inspire one to do Real Computer Science type stuff like design programming languages and implement interpreters for them. I was bitten by this as well, and have hacked together what might be charitably described as an interpreter for a new programming language. I call it Reverse Polish Cowgirl, or RPC, because it is a stack-based language, therefore using Reverse Polish Notation, and I'm living in Texas at the moment. The universe essentially forced me to make this stupid joke, and so I did, but I do apologize.

The language doesn't really have a grammar, exactly. It is a stack-based language, like Forth. The only thing the interpreter does is separate its input into strings based on whitespace. It then turns those strings into a list of Scheme atoms, and pops them off the stack according to some simple rules. The result is a horrible mess of a language, but here are the basics:

  • Numbers are any string that Chicken Scheme's string->number procedure will accept, but prefixed with a hash symbol ("#"). It means "number," get it? Examples: #1 #1.344e21 #500010
  • Built-in functions begin with an exclamation point ("!"), because, you know, action!!!! Examples: !add !eq? !swap !set !obliterate !get
  • Variables begin with "@" because that's what I decided at some point. I don't have a cute explanation or mnemonic device for this one. Examples: @a @b @_almost.anything,except&^__white,.\<\>.
  • There are some special keywords that aren't really built-in functions, but which probably should be for cleanliness' sake, although quibbling about the inelegance of this language is like complaining about the wetness of water. Anyway, here are the special keywords:
  • true and false are what you would expect them to be.
  • { and } are used for delimiting strings, as described below. { This is a string in RPC }
  • func and unfunc delimit user-defined functions (which, because this is a horrible language, are treated much differently from built-ins). Here I define a function that adds 2 to the value on the top of the stack, and assign it to a variable named @add2:
        func #2 !add
    unfunc @add2 !set
  • ifelse is the only conditional in RPC. If the top thing on the stack is true, it executes the user-defined function two beneath it. Otherwise it executes the one directly beneath it. It actually does this by removing the function it doesn't want to use and then pushing the !do function on top of it. !do is the built-in function to execute a user-defined function. Here's what it looks like in action:
        func { That's right! } !put
    unfunc @truthmessage !set
        func { That's wrong! } !put
    unfunc @falsehoodmessage !set
        @truthmessage @falsehoodmessage
        #1 #1 !add #2 !eq? ifelse
  • Strings are set off by { and }. Any "words" that begin with @, !, or #, or which are also reserved keywords, should be prefixed with ~, or things will likely fail in ways that are hard to debug, since RPC doesn't produce error messages that are very useful, even to me, the author.
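
For example (a hypothetical one), a string that mentions a builtin or a variable by name needs the ~ prefix on those words:

    { The ~!add builtin and the ~@pizzas variable are old friends } !put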

So that's the basic syntax. Now let's talk about some of the other ways it is horrible.

Many languages say that they treat functions as first-class objects. This means that you can pass functions around as arguments to other functions and possibly even do neat stuff like create lexical closures. RPC functions can be bound to variables, like in Javascript and the Lisps, but don't mistake that for the language being powerful or having closures. In fact, the only way to define functions for later use is to assign them to variables. This means that user-defined functions are second-class citizens compared to the built-in functions. User-defined functions can be called by pushing a copy of the variable that holds them onto the stack, followed by the !do builtin, or with the ifelse conditional construct as described above. Here's an example of !do (note that !mul is the multiplication built-in and !put pops off the top of the stack and displays it):

    func !dup !mul
unfunc @square !set
#5 @square !do !put

This bifurcation between built-in functions and user-defined functions sucks, and it means that if you want to use a built-in function in an ifelse statement, you have to wrap the built-in in a user-defined function. This won't work, for example (!kill is the builtin that destroys the topmost value on the stack):

{ That's a big number! } !put !kill @numpizzas #100 !biggerthan? ifelse

You have to do this instead:

func !put
unfunc @put !set
func !kill
unfunc @kill !set
{ That's a lot of pizzas! }
@put @kill
@numpizzas #100 !biggerthan? ifelse

The next thing that sucks about RPC is that there is no such thing as variable scope. Or, more accurately, variable scope is always global. Also, you can't even define functions that use variables if those variables haven't been !set globally yet. I don't mean that you need to declare them, like in Pascal or something. You actually have to give them values, because the only way to instantiate a variable is to !set it. Like all mistakes in RPC, failing to respect this will produce a cryptic error message that's very little help in determining what went wrong unless you go back and read the source code of the interpreter, and even then it's probably not that helpful unless you are also very familiar with Chicken Scheme's error messages.
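
As a hypothetical illustration, a function that pushes a counter's value plus one only works if the counter has already been given a value globally:

    #0 @pizzacount !set
    func @pizzacount #1 !add unfunc @morepizza !set
    @morepizza !do !put

If that first line were missing, the definition of @morepizza would blow up with one of those unhelpful errors.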

On top of these broken and missing features, the language does not allow comments, though you can work around this by pushing strings onto the stack and then immediately removing them with !kill. I call these "suicide comments," and they are a perfect metaphor for trying to write anything in RPC.

{ TODO: figure out why this is broken } !kill

So that's a taste of the kind of mess you can get yourself into if you decide it might be neat to write a programming language. I plan on getting myself into even more trouble in the future by trying to create a more sophisticated language with a real grammar and parser. I may even include helpful error messages, but that may be a stretch.

I'm not including the Scheme source for the RPC interpreter, though maybe I will in a later post. In the meantime, if you have need of a (barely) interpreted language with almost no features and which barely works, let me know and I'll send you a copy of the source.