Tuesday, January 29, 2013

I Sold My eBook By Live-Tweeting A Laundry And Grocery Run

A simple story told in reverse chronological order.

If you're wondering what it means, well, so am I.

Monday, January 28, 2013

Some Tracks I've Made

Ice Europa: a chillout track. Started it with the intention of creating some weird hybrid LTJ moombahton.

Delphi: ambient drum and bass with traces of Detroit techno.

Earth Tones And Green Smoke: drum and bass. Greasy, funky sound. I came up with the bassline in about 1995 and used it in a hip-hop beat reminiscent of DJ Premier. Serves a different purpose here.

im not a ring on yr fingr: drum and bass. Started as an experiment to see if I could create an entire track in my Moog Slim Phatty synthesizer. Got distracted and started adding samples on top, but the bassline and most of the drums come from the synth.

Shadow: dubstep with a weird dose of My Bloody Valentine and pop.

Telescope: dubstep. Echoes of techno in this one as well.

"Rails Is Omakase" Followup Video 2: An Imaginary Conversation

A Simple Protocol For "You Are Not Your Code"

"You are not your code" is impossible to implement if you're being a dick to somebody about their code. The solution is simple: Don't be a dick. But how would a person without social skills know if they were being a dick or not? The ability to recognize whether or not you're being a dick is a social skill which many geeks lack.

Worse yet, "you are not your code" is impossible to implement if you seem like you're being a dick, even if you have no intention of being a dick. Geeks who fail at recognizing genuine dickishness fail even harder in situations like that.

I think I know some ways to help. First, a series of problems and solutions. Then, a really simple system to prevent people from freaking out. It's a little pompous but it just solves the fucking problem forever. I think in some circumstances the tradeoff is worth it.

Problem: You are not your code

In 2011, Sam Stephenson released rbenv, a Ruby environment manager, apparently designed to replace rvm, a similar project. Drama, of course, ensued:

a slightly antagonistic tone taken by rbenv's README (which has now been taken away) led RVM's maintainer Wayne E Seguin to vent some pressure on Twitter...

Part of the complaint made in rbenv's README was about RVM's "obnoxious" redefinition of the "cd" shell builtin.

Much more recently, Mr. Stephenson complained on Twitter that discussions about rvm and rbenv still center on what a great guy rvm creator Wayne Seguin is. I find this ironic, as rbenv's README framed its own introduction around disrespect to rvm.

Mr. Stephenson works for a company which is famous for its great marketing, among other things, yet he appeared surprised that the frame of reference he used when he introduced his product is the frame of reference which people have used to understand his product ever since.

I raised this point on Twitter, with less refined phrasing, but did not receive a reply. Soon after, Mr. Stephenson wrote a blog post called You Are Not Your Code:

I have learned that in the open-source world, you are not your code. A critique of your project is not tantamount to a personal attack. An alternative take on the problem your software solves is not hostile or divisive. It is simply the result of a regenerative process, driven by an unending desire to improve the status quo.

While all this is true, in context it seems hectoring, unnecessary, and unpleasant. I sometimes think I see a similar pattern in the tweets and blog posts of a colleague of Mr. Stephenson's, namely David Heinemeier Hansson, creator of Rails. I've seen others express the same opinion.

Mr. Heinemeier Hansson wrote this on his blog:

There are lots of à la carte software environments in this world. Places where in order to eat, you must first carefully look over the menu of options to order exactly what you want. I want this for my ORM, I want that for my template language, and let's finish it off with this routing library. Of course, you're going to have to know what you want, and you'll rarely have your horizon expanded if you always order the same thing, but there it is. It's a very popular way of consuming software.

Rails is not that. Rails is omakase. A team of chefs picked out the ingredients, designed the APIs, and arranged the order of consumption on your behalf according to their idea of what would make for a tasty full-stack framework. The menu can be both personal and quirky. It isn't designed to appeal to the taste of everyone, everywhere.

It's good writing. It reads well. I even performed this blog post as a comic monologue.

But it loses credibility with a little context:

I hate to say it, but it might be fair to characterize the Ruby culture as packed to the gills with hipster drama queens. I might even be one of them; I'm blogging about drama, which sounds like something a hipster would do. But I think I know a simple solution.

The GitHub comment thread in question describes a small change to Rails defaults. Many people complained that the change complicates deployments and makes design assumptions they do not agree with. Mr. Heinemeier Hansson replied that Rails is omakase.

I consider his response imperfect, but I can see some merit to his argument, because I've blogged a solution to the problem which this change poses. It's a one-line script which completely eliminates the issue, and which you can run automatically whenever you start a new project, just by adding a command-line flag to your rails new command.

So I think Mr. Heinemeier Hansson had a point, but I can't sympathize with his reaction. Saying "you disagree with me because your company is inferior to mine" is an ineffective persuasion strategy. And, both with Mr. Stephenson and Mr. Heinemeier Hansson, I think I perceive a strange combination: reasonable blog posts which appear designed to justify unreasonable behavior.

Solution: Apologize

In contrast, I want to highlight a simple apology which Corey Haines made on his blog. Mr. Haines had made fun of somebody's code on Twitter, upsetting them. Here's how he apologized:

Yesterday I made a mistake. Without thinking, I put up a mean tweet about some code that Heather Arthur wrote. And I want to apologize...

So, I'm sorry, Heather. There is no excuse for what I wrote.

There's a little more to it than that, but not much. When a person writes a real apology, you read it, and you're done. I think a similar strategy would have saved time and agitation for Mr. Stephenson and Mr. Heinemeier Hansson. Mr. Stephenson could have said "I'm sorry I disrespected rvm. Let's move on." Mr. Heinemeier Hansson could have said "I'm sorry but I'm not altering my decision. Let's move on." In either case, both would have been able to eliminate arguments.

Likewise, I want to apologize for any disrespect in using Mr. Stephenson and Mr. Heinemeier Hansson as examples.

Problem: Haters Gonna Hate On JavaScript

I joined the Ruby Parley mailing list recently. It's a private list but membership can be as cheap as $10, which is the option I went with. The content's good, but it uses Google Groups, which is not a user experience I like or even understand.

Consequently, I only just the other day discovered a Parley email which had been sent to me a few weeks ago. I had made some comments about Ruby and its Modules, and in the newly-uncovered email, Josh Susser had replied with a tangent about JavaScript, and speculations about my personal experience with it, which I found irritating and irrelevant.

Solution: Bail

I cancelled my membership and requested a refund:

Hi, please cancel my $10 yearly membership. I want a refund. I believe a member of the podcast trolled me.

Transaction ID is xxxxxxxxxxxxxxxxxx. xxxxx@xxxxx is my PayPal address.

Not an anger thing. I've had near-death experiences, and nobody spends their time on internet flame wars after they've had near-death experiences.

I wish you all the best regardless, but I want my money back, and I want it to be a fast, simple process.

Thank you.

The bit about near-death experiences is true, and it influences your understanding of time management. "Life is fragile, nobody knows how much time they get, and I'm going to spend my precious remaining moments arguing with some dude" is not a sentence I can agree with, although it certainly has parts which make sense.

Terrifying Story About Facing Mortality

By the way, I don't mean the type of experience where you encounter a floating ball of light and learn the wisdom of the universe. I mean the kind where you're like, holy shit, I almost died. One of my near-death experiences involved this complete dipshit of a cardiologist who actually put a catheter inside my heart which was too big to fit, and caused my heart to stop pumping for a second. I was watching the whole thing happen on an x-ray monitor at the time. I literally saw my own heart stop right in front of my eyes.

Nobody's going to have that experience and then think arguing on the Internet is interesting.

In fact, it's entirely possible that Mr. Susser meant me no harm whatsoever. I wrote a reply to his remarks, making it as civil as I could and being sure to include some useful information, so for all I know the discussion is still going on. I don't know, because I haven't been back, and won't. One way or another, it's over and done with.

Internet drama sucks up time and attention like some kind of horrible, greedy sponge-demon. Nine times out of ten the only thing that matters about Internet drama is how quickly and cleanly it can end.

Problem: The Battle of Los Angeles

A long time ago, Steve Klabnik tweeted a random comment about how he hated Los Angeles for some reason. I tweeted back that it's a great city. I grew up in Chicago, I used to live in San Francisco, and in my childhood I frequently visited London. I think I know great cities. And I live in Los Angeles.

Mr. Klabnik refused to back down, give my argument any credit at all, or apologize even a little.

(Edit: Mr. Klabnik emailed me to tell me that he remembers the events differently, and also to apologize regardless, which I think was pretty cool.)

Solution: Block

I blocked him on Twitter for more than a year. In that time frame, Mr. Klabnik moved to Los Angeles, where he now lives. We never settled the issue, but I unblocked him because he had written some blog posts I liked, I quoted those posts in my book Rails As She Is Spoke, and everything was fine until he started making fun of somebody's Node.js project and I blocked him again.

However, I block people all the time. I block them for angering me, or saying things I disagree with, but I also block them for bad spelling, bad timing, incomprehensible remarks, or reminding me of frogs. I recommend you look at Twitter as a game where you find excuses to block people and the person who blocks the largest number of people, for the largest number of different reasons, wins.

Blocking people on Twitter does not have to be about anger or hate. Blocking people on Twitter is about the fact that in real life, in chat systems, or on the phone, you can shut down a conversation, or change the subject, whereas on Twitter, there are situations where you can only either uninstall your Twitter client or block people. (Email has a similar problem, but it's much less severe, because email has a concept of deleting.) The Canadian singer Grimes said in an interview that she blocks her own mother on Twitter.

Only one person in my experience ever freaked out when I blocked them on Twitter, and they had a very widespread reputation for freaking out about things already when it happened.

A Simple System Which Always Works

Speaking of Mr. Klabnik, you're about to see his name a lot in the next few paragraphs. Before I left the Ruby Parley list, a conversation came up involving him, and in this conversation, somebody who did not know Mr. Klabnik referred to him as "Mr. Klabnik," instead of by his first name, just like I'm doing here.

I thought that was really cool. Jeff Casimir, who works with Mr. Klabnik, posted that he thought it was the first time anyone had ever said that. Technically correct but essentially wrong. I referred to Mr. Klabnik as "Klabnik" in my book on Rails, Rails As She Is Spoke, and did the same for everyone else I quoted, or whose ideas I discussed. I used last names throughout the book. To quote directly:

My goal is to make it clear that if I disagree with anyone, it's about their ideas. Using last names is more formal, but it's important to create an atmosphere of respect.

(In the initial version of my book, I followed this with a remark to the effect of "but, fuck Zed Shaw." I feel you have to draw the line somewhere. But this remark really upset one reader, so I've toned it down a bit. But only a bit. I really do think you have to draw the line somewhere.)

Where It Comes From

Nate Silver made me remember it, but I got this idea from St. John's College, where I studied for a year. St. John's has no majors, no optional classes, and no professors either, if you're going to be strictly literal; instead, you read a fixed selection of books, you discuss them, and you have tutors, equivalent to professors, who guide the discussions.

In addition to obvious topics like philosophy, literature, and history, you also study math this way, spending nearly the entire first year discussing Euclid's Elements, the first book in Western history about geometry and basic number theory, and proving its theorems. You memorize proofs, perform them, and defend them from criticism. In some cases, people will explore or invent alternate proofs.

Here's a YouTube video of two students at St. John's demonstrating the Pythagorean Theorem:

But due to a lack of context, and incredibly poor production values, the video doesn't capture what the experience is like at all. It's a very, very different experience than what people normally think of as math class. Euclid doesn't even mention numbers until you get to the end of the book. There's nothing but logic and abstractions until then, and you have to fit them all together in a coherent structure. It's a book about how a system built on a few very consistent rules allows you to build complex and sophisticated conceptual tools.

No matter how good you imagine that experience might be, for the purpose of preparing a young person to become a programmer, trust me: it's better than that.

Anyway, you also study science the same way, plus enough Ancient Greek to read a few snippets from the Elements in its original language, along with bits of Aristotle, Herodotus, Plato, Homer, and Sophocles.

The curriculum revolves around discussion as a means to figure things out. Because discussion involves disagreement, you refer to each other as "Mr. Foo" or "Ms. Bar" during classes at St. John's. This simple gesture of respect permits people to discuss provocative opinions -- for instance, religion vs. atheism -- with clarity and courtesy. And St. John's really is a place where religious people learn from atheists, and vice versa.

Why It Works

When Mr. Stephenson wrote You Are Not Your Code, he said something true. You are not your blog posts, either, or your emails, or your tweets, or your ideas. If you want to disagree with somebody's ideas, however, it's smart to differentiate between the person and the idea, and there's an easy way to do that: use the most respectful available term for the person. That makes it easier for the person to disengage their ego and actually listen.

If I'm doing serious writing about code, like in my book, I like to refer to other people by their last names, in the style of St. John's College, because this formal style signifies respect. You want that established from the beginning, as a default, so that it happens automatically. Disagreement without respect inhibits clarity, and you would have to be a complete fool to enter a programming discussion without anticipating the possibility of disagreement.

I suspect, however, that the number of complete fools who write open source software is greater than zero, in part because you can find, anywhere on the web, on any given day, staggering numbers of angry programmers expressing surprise because some discussion fell apart. Many programmers spend a great deal of time arguing without coming to any useful conclusion. I played a part in this horseshit myself for many years, but I've lost all interest in it.

So Do It

Formal modes of address constitute a simple protocol for disengaging egos. You type two letters, a period, and a space, and then you type the other person's name. It isn't complicated, but it is sometimes worth doing.

Thursday, January 24, 2013

We Can Solve The Multiple-"Default"-Stacks Problem With Rails Application Templates

Steve Klabnik blogged that Rails has two default stacks: the official default stack Rails ships with, and the de facto default stack, used by most prominent Rails developers.

First, let’s talk about the actual default stack. Since Rails is omakase, I’m going to call this the ‘Omakase Stack.’ Roughly, this stack contains:

ERB for view templates
MySQL for databases
MiniTest for testing
Fat Models, Skinny Controllers

There has been a ‘second default stack’ brewing for a while now. I’m going to call this the “Prime stack”. This stack’s approximate choices are:

Haml for view templates
PostgreSQL for databases
Rspec/Cucumber for testing
Skinny models, controllers, and a service layer

But his metaphor breaks down a little, because the second stack isn't really a stack:

A considerable minority uses a stack like this. It’s important that the Prime Stack isn’t exact: you might not use Cucumber at all, for example, or maybe you don’t have a service layer.

On the (private, but cheap) Ruby Rogues "Parley" email list, I came up with an alternate interpretation:

I think the problem is the concept of a stack.

Everybody builds Rails apps their own way. There's a 37Signals (or
omakase) stack, a Thunderbolt stack, a Hashrocket stack, and many
other stacks, and in most cases the "stack" is not a fixed machine but
a fluctuating ecosystem. (I'm not sure if that's the right metaphor
either, but it'll work until I get an idea for a better one.) You
experiment with different gems on different projects, and some of them
you use more often than others.

There's an approved set of choices which represents Official Rails™,
but this doesn't have much to do with any actual consensus among Rails
developers. A lot of people depart from the canonical path a little
bit, for many different reasons. But Rails developers know the value
of convention over configuration, so we all try to develop
*conventions* for our deviations from the canonical path.

For example, "we use DataMapper on every project" becomes a local
convention at Company XYZ. But prior to Rails 3, you maybe had to
write a shell script or something to strip out ActiveRecord and swap
in DataMapper, so if you're doing this, and you do a lot of
consulting, you might start to accrue a little library of post-
installation scripts.

If this continues for a long time, and your company develops a
sufficiently sophisticated set of consistent deviations from Rails
canon, it's almost like you have an alternate version of Rails.

In my opinion the alternative "stack" is really an intersection of
many alternate versions of Rails. Within the context of companies
(i.e., semi-private communities), we've created a huge range of custom
Rails permutations, and people share a vague consensus about certain
consistent ways these individual branches differ from the official,
canonical stem.

(Trying to think of the right metaphor but nothing springs to mind.)

Anyway, I think this is a good thing, but I think the Rails docs would
be better if they acknowledged this process, and its artifacts, and if
we had ways to sharpen the focus and turn the vague consensus into
something more specific.

But I was wrong. Rails has supported this use case for many years, and the docs for the upcoming Rails 4 release describe it in detail.

Rails's creator David Heinemeier Hansson has a Twitter feed. The bad news: I find that feed kind of noisy. The good news: I found a good tweet in it today, and I think it deserves some attention.

Rails application templates provide a simple DSL for creating local, custom Rails "defaults." We've had this feature since Rails 2.

Caveat: in some cases, the DSL gets a little silly:

The DSL provides a custom git command for running arbitrary git commands. I might put my git commands in a shell script instead, but even if you opt for a Unix-centric approach, you can still trigger shell scripts (or indeed any arbitrary Unix software) from within a Rails app template via the run command.
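For context, a minimal template using that DSL might look something like this. This is my own sketch based on the documented template API, not code from the original post, and the filename and choices are invented:

```ruby
# my_company_defaults.rb -- a hypothetical Rails application template.
# `gem`, `run`, and `git` are methods provided by the templates DSL.
gem "haml"                      # add a gem to the generated Gemfile
run "rm -f public/index.html"   # shell out to any Unix command
git :init                       # the slightly silly git wrapper
git add: "."
git commit: "-m 'initial commit'"
```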

Templates operate as command-line arguments to rails new, either as filenames or URLs, e.g.:

$ rails new my_application_name -m ~/my_company_defaults.rb
$ rails new my_application_name -m https://example.com/my_company_defaults.rb

Which would be somewhat equivalent to running this manually:

rails new my_application_name && ruby ~/bin/my_company_rails_customizer.rb

...but with a much nicer syntax.

Irony time: the use case for which Hansson recommends using Rails application templates is not actually a use case which the Rails app templates DSL supports.

Many of the least popular changes to Rails have revolved around very small changes to the standard Gemfile -- typically a matter of removing a few lines when you first generate your app -- but the Rails app templates DSL does not have a remove command. It does have a gem command for adding gems, but if you want to remove them, you'll be writing that code yourself, in Ruby or in shell.

However, this is easy in Ruby, and extremely easy in bash.
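In Ruby, for instance, a sketch might look like this. The method name is made up, and this is my illustration rather than code from the post:

```ruby
# Drop every line of a Gemfile's text that mentions the unwanted gem.
# (Hypothetical helper; `strip_gem` is not part of any Rails API.)
def strip_gem(gemfile_text, gem_name)
  gemfile_text.lines.reject { |line| line.include?(gem_name) }.join
end

gemfile = "gem 'awesome'\ngem 'lame'\n"
puts strip_gem(gemfile, "lame")  # prints: gem 'awesome'
```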

Say you've got some Rails Gemfile which includes both an awesome gem and a lame one:

[01-24 10:55] ~/dev/example_project
↪ cat Gemfile
gem 'awesome'
gem 'lame'

Removing the lame gem is a one-liner in bash:

[01-24 10:55] ~/dev/example_project
↪ grep -v "lame" Gemfile > Gemfile.new && mv Gemfile.new Gemfile

Problem solved.

[01-24 10:55] ~/dev/example_project
↪ cat Gemfile
gem 'awesome'

With the run command, that is of course also a one-liner in a Rails application template:

run "grep -v 'lame' Gemfile > Gemfile.new && mv Gemfile.new Gemfile"

Admittedly, it's a long line, and you have to be decent at Unix to make any sense of it, but it's still better than doing it by hand or arguing about it on the Internet. By the way, if you find a cleaner way to do it in bash, please gist me a solution (@gilesgoatboy on Twitter) as I'm no bash wizard. I hope it's obvious you can wrap that in a method:

def no_fugu_please(toxin, dish)
  unix = "grep -v '#{toxin}' #{dish} > #{dish}.new && " +
         "mv #{dish}.new #{dish}"
  run unix
end

Now you have a usable remove:

no_fugu_please("some_lame_gem", "Gemfile")
no_fugu_please("/bin", ".gitignore")

If you don't want to litter your code with obscure sushi jokes, you could even just use remove for your method name:

remove("some_lame_gem", "Gemfile")
remove("/bin", ".gitignore")

It's important to realize, however, that this does not solve the fundamental problem, on its own, of Rails having two "default stacks." The only way for the community to solve that problem is to both use Rails app templates and share them on GitHub.

Apart from anything else, if you can count the number of Rails app templates which throw away one gem and replace it with another, you now have an objective metric for what techniques the Rails community favors at any given time, as well as a subjective metric ("who uses what") for the credibility of individual gems and tweaks.

So please, use Rails application templates, and share them on GitHub.

Update: improved Unix-fu from Anthony Moralez and more from a Zander.

Also a GNU sed version and old generator web sites.

Sunday, January 20, 2013

Trinkets For Command-Line Performance

Take a peek at this MIDI controller:

Twelve switches which send MIDI, and two expression pedals which send MIDI. Most people use it to control loop-triggering in Ableton Live.

But it can do more. If you can write code which takes MIDI input, sends MIDI output, and retains the full power of a Turing machine for processing the MIDI in between, then the twelve switches on this box don't just have to give you a total of twelve loops you can trigger.
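If you treat the switches as chord keys rather than dedicated buttons, the arithmetic works out like this. This is a sketch of the idea only, with no actual MIDI handling:

```ruby
# Treat the 12 footswitches as bits of a chord: any combination of
# switches held down at once is a distinct input. (Sketch only;
# real code would map each bitmask to a MIDI message.)
SWITCHES = 12
chords = (1...(2**SWITCHES))  # every non-empty combination of switches
puts chords.size              # 4095 possible commands instead of 12
```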

Consider the chorded keyboard:

A keyset or chorded keyboard (also called a chorded keyset, chord keyboard or chording keyboard) is a computer input device that allows the user to enter characters or commands formed by pressing several keys together, like playing a "chord" on a piano. The large number of combinations available from a small number of keys allows text or commands to be entered with one hand, leaving the other hand free. A secondary advantage is that it can be built into a device (such as a pocket-sized computer or a bicycle handlebar) that is too small to contain a normal-sized keyboard.

You can control emacs or vim from a chorded keyboard. I have no plans for that, though. I mention it because you pilot either emacs or vim with a combination of muscle memory and brief commands, but that combination permits extraordinary fluidity. Using bash works the same way, and bash is Turing-complete.

Much current UX thinking revolves around gestural interfaces, but I find that to be a shallow idea. You can communicate much more sophisticated instructions to a machine by using a programming language than you can via gestural interfaces. For example, most people conduct business among one another using words, rather than poking each other, except in the case of soldiers and porn stars. I can think of one exception: gesture recognition becomes much more sophisticated when the person gesturing knows American Sign Language, but even there, the power comes from the language, not from the gestures. I think the world of music software needs more domain-specific languages and fewer gestural interfaces.

I also really like this:

This box receives USB input and sends output in the DMX protocol, which controls theatrical, club, and concert lighting. You could, in theory, program this box via CoffeeScript, Ruby, Perl, or any of several other languages, or indeed design a chord-based 10-key language for it. You can, in practice, plug the other end into a huge variety of lasers, smoke machines, and spotlights.

I have both of these widgets, but I haven't gotten them to work yet. I'm hoping to fix that in the coming year via my side project, Teaching The Robots To Sing, a loosely-defined ongoing video series which I started last year.

Saturday, January 19, 2013

CoffeeScript Driving Drums And A Bassline

CoffeeScript Bass Line

Friday, January 18, 2013

Higher-Order Functions: What Are Their Analogues In Human Grammar?

Get ready for some epic hand-waving.

I'm reading Reg Braithwaite aka Raganwald's new book JavaScript Allongé, and -- because it's a Leanpub book and therefore to some extent a work in progress -- subjecting it to some pretty merciless copyedits as I go.

JavaScript Allongé explores JavaScript in detail, from the perspective of a Lisper with a console and a curious mind. I don't think the book contains a word about either concurrency or the DOM. Instead, it pokes its way through JavaScript itself, taking things apart to see how they work, like a curious gnome enjoying a leisurely stroll through a giant machine. Or like Donald Duck In Mathmagic Land, but with a strong emphasis on higher-order functions.

From the book's blurb:

JavaScript Allongé emphasizes functions as first-class values, and topics built on functions such as objects, prototypes, "classes," combinators, method decorators, and fluent APIs. JavaScript Allongé begins at the very beginning and progresses step by logical step through JavaScript's unique approach to functions, idioms, and even syntactical peculiarities, so that by the end of the book you'll have worked your way from the simplest of functions up to combinators that create method decorators for use with classes and objects.

Anyway, I want to pick at one of the specific metaphors in the book:

Pure functions that act on other functions to produce functions... are the adverbs of programming.

As I understand it, the argument goes like this: higher-order functions resemble adverbs because functions resemble verbs, and higher-order functions modify functions. Functions resemble verbs because they represent actions, rather than objects, although that's a bit tricky since functions basically are objects in JavaScript. All in all, it's a reasonable argument, but I disagree with it.

I disagree because there's more than one way to modify a verb, and if you want to draw parallels between the grammar of human language and the way programming languages work, you have many, many human languages to choose from. Different languages modify verbs in different ways.

My favorite human language is Attic Greek, the dialect of Ancient Greek which Plato, Aristotle, and their contemporaries in Athens used. Attic Greek, and Ancient Greek in general, features an astounding and complex range of prepositions.

Prepositions modify verbs by indicating the manner in which a verb operates. We have them in English; for instance, consider the difference between talking to a person and talking at a person. "To" and "at" are prepositions, and they modify the meaning of the verb "to talk."

Ancient Greek prepositions are more numerous, more subtle, more sophisticated, and more powerful than English prepositions. Here's a diagram of some of the more popular Greek prepositions:

As pretty as that is, you'll probably get more mileage out of this version, with transliterations and annotations in English:

What makes prepositions so important in Ancient Greek? To quote a web site on the topic:

Prepositions are often combined with verbs to form compound words.

(That web site has a religious perspective on the language; this happens often, because the core religious text of Christianity was written in a dialect of Ancient Greek. You can get all the linguistic benefit and ignore the religious content if you like. It's certainly how I approach the topic.)

Anyway, you can fuse Ancient Greek prepositions onto verbs, and you can also fuse them with each other. You can do a tiny smidgen of this in English -- consider the extremely relevant English term "overthinking" -- but Greek takes it to dizzying heights. For a wonderful, lurid example, Ancient Greek used compound verbs to describe sexual positions. Different sexual positions got their own verbs. Prepositions would fuse with each other and the verb itself to describe the exact way in which the verb occurred. So the translation for "doggy style" would be something like "kata-eis-fucking," or "behind-into-fucking." Some pornographic orgy situation might get a term like "inbetweenalongwithfucking." ("Meta" might be in there somewhere, too.) Sex in public would only need one preposition and one verb: "emprosthen-fucking," or "fucking-in-the-presence-of."

By the way, I think German features a similar approach to verb construction, but I don't know for sure. Also, if the lubricious nature of the example disturbs you, that Bible web site features less provocative examples:

Prepositions are often combined with verbs to form compound words. The effect of the preposition on the meaning of the verb varies, but we can loosely categorize most of these effects as follows:

1. The meaning of the preposition is combined with the meaning of the verb.
For example βαίνω means "I go." Remember that κατά can mean down. Accordingly, καταβαίνω means "I go down."

I know some of you are thinking "that's what she said" right now. I've met my readers, and I'm very familiar with your level of maturity. But I'm sure we can find something calmer in a subsequent paragraph:

2. The meaning of the verb is intensified. Compounds intensified by a prefixed preposition are sometimes called "perfectives" because the action is viewed as carried out to perfection, i.e. to completion. For example, ἐσθίω means "eat," but when κατά is prefixed to form κατεσθίω, the meaning is "devour" (see κατεσθίω used in Mk. 12:40). Here, perhaps we see something reminiscent of an English idiom that makes the Greek seem less strange. If we talk about some one devouring his food, we may say "he eats it up."

OK, now I bet my female readers are thinking "that's what he said". I should have known this was going to end badly. Anyway, in the same way that you can combine prepositions with verbs to form new verbs in Ancient Greek, you can combine higher-order functions with other functions to form new functions in JavaScript, Lisp, Ruby, Perl, or any language which supports higher-order functions.

Because of this, I think using adverbs as a metaphor to describe higher-order functions is the wrong way to go. However, since you write code so that you can understand it later, and English lacks the preposition-fusing features of Ancient Greek, I find that in real life, I tend to name my higher-order functions with the present participle.

Present participles modify a verb in a clause. Speaking in terms of this sentence, I'm using "speaking" as a present participle to modify the verb "using," and I'm using "using" to modify "modify" (twice!). Present participles in English make good names for higher-order functions because, like adverbs and Greek prepositions, they modify verbs. But I think they make better verb-modifiers in English than adverbs do, because they add parallel, ongoing action to verbs, while adverbs add characteristics or context.

I got this pattern from Rails several years ago, but I don't remember the name of the exact method, and the best example I can find in the current codebase doesn't quite fit. For the sake of argument, here's a CoffeeScript example instead.
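(The gist embed is missing here; what follows is a reconstruction of its shape. Only the names contradicting, deny, and affirm come from the original; the verb bodies are hypothetical.)

```coffeescript
# "contradicting" is a higher-order function: it takes a verb and
# returns a new verb which asserts the opposite. Verb bodies are
# hypothetical; only the names come from the original post.
contradicting = (verb) -> (args...) -> not verb args...

deny   = (claim) -> claim.plausible is false
affirm = contradicting deny   # affirm is literally "contradicting deny"
```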

Here contradicting is the present participle, while deny and affirm are straight verbs.

In Ancient Greek, you could use ἀντί, which conveys opposition and sort of means "against." The word inspired, sounds like, and roughly corresponds to our "anti," as in "anti-war movement" or "anti-pong," the quirky British slang for "deodorant." ("Pong" is the British word for a minor but offensive odor. It also functions as a verb, i.e., "these old, dirty socks pong a bit.")

With ἀντί, you have the disadvantage that understanding Ancient Greek prepositions requires specialized knowledge, and programmers are hard enough to hire as it is. But the advantage is that you can chain Greek prepositions almost endlessly, while endless chains of present participles in English can look a bit odd.

I think that's actually a big deal. I suspect that if most programmers knew Ancient Greek, you'd see a lot more functional programming.

"Sapir-Whorf" is a recurrent meme from the last several years of programming discussion. It refers to research by linguists which show that the language you use shapes your thoughts. I'm told that the way programmers use the term misrepresents the linguistic research, so I bring up the topic reluctantly, but the meme spreads and survives because everybody who's worked with a few very different programming languages needs a word for the different modes of thinking you experience from language to language.

Consider how complicated it is to talk about this in English, and compare that to how simple it would be if all I had to say was "you chain higher-order functions the same way you chain prepositions when you want to describe a sexual position." An entire way of conceptualizing action, which requires an unbelievably erudite point of view for an English speaker, required nothing more than knowledge of the crudest slang for a speaker of Ancient Greek.

This is because Ancient Greek expresses some ideas with a subtlety and nuance which English can't match, and its preposition-fusing compound verbs play an important role in that. When you write code with higher-order functions, you often chain them together in complex ways, so the analogy's very close. But, if you haven't studied Ancient Greek, I think present participles function better as an analogy for higher-order functions than adverbs do.

However, I hope it's obvious that if this is the biggest complaint I have about a book, then I have to call it a pretty thought-provoking book.

Thursday, January 10, 2013

Rails/OOP Book Sales Numbers, And Why My Hourly Rate Is A Vector

In late November, just before Thanksgiving, I wrote a 97-page ebook. It's a critique of Ruby on Rails from the perspective of OOP, but it's also the opposite.

I wrote the book in a week -- in fact, if you don't count days of pure resting, then I wrote it in five days. It has a decent web site but it launched with nothing but a blog post. I deliberately under-promoted it, and my writing workflow was so absurdly oversimplified it makes Lifehacker look like NASA. You can only pay with PayPal, which means a lot of people might not have been able to buy it. I haven't made the Kindle version available yet; it's PDF only right now.

The book's made $11,009.30 so far. I sold 309 copies.

Yesterday I downloaded some sales history from PayPal, and tonight I plugged my sales numbers into a quick Ruby script to produce a bar chart of daily sales.
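A minimal sketch of that kind of script (not the original, and the numbers here are made up):

```ruby
# Turn daily sales totals into a crude ASCII bar chart,
# one '#' per $50 of revenue. Dates and totals are hypothetical.
def bar(total, dollars_per_hash = 50.0)
  '#' * (total / dollars_per_hash).round
end

daily_sales = {
  "2012-12-30" => 150.0,
  "2012-12-31" => 1200.0,
}

daily_sales.each { |date, total| puts "#{date} #{bar(total)}" }
```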

Here's what it looks like from a distance:

Check out the gist to zoom in and see the chart in detail. I think the two biggest spikes came from appearances in Peter Cooper's Ruby Weekly e-mail newsletters, and I think the spike on Dec 31 came from a tweet by Michael Feathers, despite the fact that he didn't even mention my book:

This is all just conjecture; I never even set up Google Analytics for this product. If you know anything about selling information products online at all, you'll also notice there's no email signup form. I approached this project with minimal marketing, but it's still very profitable.

If you assume I worked 8 hours on each of the five days when I was writing my book, then I made a little more than $275 per hour during that time ($11,009.30 / 40 = $275.23). Eight hours is a safe approximation, although I don't actually know the real schedule.

However, the best thing about selling information products is the retroactively expanding hourly rate. Nobody paid me to do the actual work, so initially, my hourly rate was zero. If we assume I worked forty hours on the book, then when I sold my first copy, my hourly rate was about 93 cents per hour ($37 / 40 hours = 0.925). Every sale bumped up my hourly rate by around 93 cents. So if I sell 100 more copies, the hourly rate (for the time that I already invested) will go up by $92.50. So it's probably $275 now, it'll be $300 soon, and so on, and so forth.
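Spelled out as code, the arithmetic is a one-liner; a toy Ruby sketch using the numbers above (a $37 book, roughly 40 hours of work):

```ruby
# The "retroactively expanding hourly rate": the hours are fixed,
# so every sale pushes the rate for those same hours upward.
PRICE = 37.0   # dollars per copy
HOURS = 40.0   # time already invested

def hourly_rate(copies_sold)
  (copies_sold * PRICE) / HOURS
end

hourly_rate(1)    # => 0.925, about 93 cents per hour
hourly_rate(100)  # => 92.5, so each hundred sales adds $92.50
```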

But I don't think of myself as having a numeric hourly rate. Since my hourly rate is increasing over time, it makes more sense to model it as a vector. It has a position (somewhere around $275) and it has a direction (up).

Being a programmer, this is how I understand the business of making and selling information products. You decouple income from time, and you convert your hourly rate from a number to a vector.

This is kind of like one of those physics problems, though, with their hypothetical frictionless universes. In reality, I also spent seven years working enough with Rails to form detailed opinions about it. And of course I spent time marketing the book, but not much. My primary marketing tactic was to give the book away for free. If I quoted you in the book, or I talked about a presentation you made, you can have the book for free. (And most of you already do.)

Promoting your book by giving away free copies is awesome. I recommend it. In 2010, I made a bunch of information products -- mostly videos -- and I did a bunch of dramatic launches. I had countdowns. I made products available only for limited time periods. I did loads of silly things like that.

All these things work to some extent, and I'm not denouncing them, but this time, I wrote one guest post and I gave the book away to people who I knew would be interested. I believe this strategy makes me more money. It's definitely more fun and less hassle.

I'll probably upgrade my systems, so I can sell without PayPal, worldwide, with email marketing and affiliate programs, but I'm more interested in writing another book.

(And, obviously, giving it away for free.)

Wednesday, January 2, 2013

Question For My Readers: How Would You Implement This?

Update: Answer's apparently zsh.
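Presumably the zsh feature in question is its built-in correction options:

```shell
# in ~/.zshrc -- zsh's built-in spelling correction
setopt correct       # offer to correct misspelled command names
setopt correct_all   # also try to correct arguments (more aggressive)
```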

Peter Norvig showed that it's incredibly easy to write a probabilistic spelling corrector in a very small number of lines of code. His readers, which means virtually every hacker on the planet, have ported or re-implemented his demo in a staggering number of languages.

Meanwhile, I added this to my bash profile:

alias rpsec="rspec"

I could probably re-implement Norvig's solution without even re-reading it; I read one of his AI books years ago and built an entire breakbeat improviser around probability. But where I want that probabilistic spelling corrector is in the shell. And I have no idea how you would add that to the shell, although I think some shells might already even have it.
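For reference, the core of Norvig's trick really is tiny. Here's a Ruby sketch of the edit-distance-1 idea, with the probability model stripped down to a plain known-words list (so it's a simplification, not a port):

```ruby
# Generate every string one edit away from a word:
# deletions, transpositions, replacements, and insertions.
def edits1(word)
  letters = ('a'..'z').to_a
  splits  = (0..word.length).map { |i| [word[0...i], word[i..]] }
  results = []
  splits.each do |a, b|
    results << a + b[1..] unless b.empty?                          # deletion
    results << a + b[1] + b[0] + b[2..] if b.length > 1            # transposition
    letters.each { |c| results << a + c + b[1..] } unless b.empty? # replacement
    letters.each { |c| results << a + c + b }                      # insertion
  end
  results.uniq
end

# Return the word itself if it's known, otherwise the first
# known word one edit away, otherwise give up and return it as-is.
def correct(word, known_words)
  return word if known_words.include?(word)
  edits1(word).find { |candidate| known_words.include?(candidate) } || word
end

correct("rpsec", ["rspec", "ruby"])  # => "rspec", fixing the typo above
```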

A) Do you know which shells those are?

B) Do you know how hard it would be to add this to bash?

If you have a good answer, please email it to me or tweet it:

Tuesday, January 1, 2013

Rails, Ruby, And Type-Checking

Here's a pair of tweets:

I guess I was pontificating a little, but I want to go into more detail.

Rails does something brilliant with its association classes like belongs_to and has_one; it gives composition the same kind of emphasis and importance that inheritance already enjoys in every class-based object-oriented language (e.g., everything from C++ to Python).

Rails lets you declare composition right at the top of the file, right after you declare inheritance:

class Sub < Super
  has_one :association
end

This is part of ActiveRecord, but it's not intrinsic to databases. It's intrinsic to objects themselves. Imagine Rails never used a database. Would belongs_to and has_one still be useful? Of course they would. I really believe that languages should treat this as a core feature. If composition matters more than inheritance, but language features put a spotlight on inheritance while downplaying composition, then you're not dealing with an ideal design. There's a mismatch.

There's more to belongs_to than databases. And you can see it if you ask a simple question: if Rails never used a database, what difference would there be between belongs_to and attr_accessor? Say you wanted to copy this part of Rails. It would not be enough to do this:

alias :belongs_to :attr_accessor

And the reason it would not is because Rails association class methods use a type-checking system. Here's something you can do with attr_accessor:

Foo.attr_accessor :round_thing
@foo = Foo.new
@foo.round_thing = @square_thing

Wow! Look at you and your reckless shenanigans. You just put a square peg in a round hole! That's right - you didn't just talk about it. You did it. And you got away with it, too, because there's absolutely nothing in Ruby to stop you. You are mad, bad, and dangerous to know.

But if your database is full of RoundThing and SquareThing ActiveRecord models, then there might be something in Rails which stops you. Because in Rails, you can't do this:

Foo.validates_roundness_of :round_thing
@foo = Foo.new
@foo.round_thing = @square_thing

In fact, you can't even do this:

Foo.has_one :round_thing
@foo = Foo.new
@foo.round_thing = @square_thing

Of course, it's easy to implement basic opt-in type-checking in Ruby:

class Foo
  attr_reader :square_thing
  def square_thing=(thing_with_right_angles)
    raise "hell" unless thing_with_right_angles.is_a? SquareThing
    @square_thing = thing_with_right_angles
  end
end
But it's easier to read the Rails equivalent:

class Foo
  has_one :square_thing
end

And it makes a lot of sense to abstract that out into a pattern. That shortcut should, in my opinion, become a common idiom in future object-oriented languages.
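To see why it's such a natural idiom, here's a sketch of the pattern outside Rails entirely; the module name and error message are made up, and this is nothing like the real ActiveRecord implementation:

```ruby
# A has_one-style class macro with no database behind it:
# declare composition at the top of the class, and get
# type-checking on assignment for free.
module TypedComposition
  def has_one(name)
    klass_name = name.to_s.split('_').map(&:capitalize).join # :square_thing => "SquareThing"
    attr_reader name
    define_method("#{name}=") do |value|
      klass = Object.const_get(klass_name) # resolved lazily, at assignment time
      raise TypeError, "#{name} must be a #{klass_name}" unless value.is_a?(klass)
      instance_variable_set("@#{name}", value)
    end
  end
end

class SquareThing; end

class Foo
  extend TypedComposition
  has_one :square_thing
end

foo = Foo.new
foo.square_thing = SquareThing.new   # fine
# foo.square_thing = "round peg"     # would raise TypeError
```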

Although the type-checking with association classes is very cut and dried, with its validations, Rails provides an extremely customizable and mostly optional type-checking system. This is one of the weirdest parts of Rails: a type-checking system for attributes that lives on the class which has the attributes, rather than on the attributes themselves. But it works really well. A lot of people have strong opinions about type-checking, calling it a terrible idea or an absolute necessity. Rails makes it easy for you to say "I'm going to use type-checking here, but I'm not going to use type-checking there."

Yet this also leads to inconsistent implementation. For example, you can easily bypass validations with update_attribute, which is less a matter of the type-checking system being optional, and more a matter of it just not being there sometimes, for unguessable reasons of its own.

When I lived in New Mexico, I once house-sat for a guy who had a "pet" which was half-dog and half-coyote. This "pet" was not really domesticated. It was less of a pet and more of a canine homie. You could really say "what up, dog?" and mean it in this situation. The animal was friendly, independent, and half-wild. It could effortlessly jump over a 7-foot fence, and it sometimes liked to wander into the woods for a week. That's kind of like the "type-checking system" in Rails. It disappears and reappears on its own schedule.

Coyote (Canis latrans)

Consider Gary Bernhardt's gem do_not_want:

>> User.first.update_attribute(:name, 5)
DoNotWant::NotSafe: User#update_attribute isn't safe because it skips validation

From the readme:

In my experience, even experienced Rails developers don't know which ActiveRecord methods skip validations and callbacks. Quick: which of decrement, decrement!, and decrement_counter skip which? (Hint: they're all different.)

do_not_want prevents you from using methods which bypass validations and callbacks, forcing you to use a subset of Rails whose behavior is relatively consistent and predictable. I would not ever want to use do_not_want personally; in fact, I used to have a bad habit of always using update_attribute deliberately to avoid validations, but if memory serves, I picked up that bad habit from some very good programmers. It's tedious to use type-checking when you're a fan of dynamic languages.

Nonetheless, I might inflict do_not_want on my subordinates, if I ruled some Dilberty cubicle farm with an iron grip, and the power was starting to go to my head. It's the type of authoritarian imposition which might save you a lot of aggravation later on.

So, because of that tension, and the Perl-like inconsistency of the implementation, I can't call the sometimes-optional type-checking which Rails bolts onto Ruby (except when it doesn't) an ideal design. But I think it's better than many alternatives. Java, and similar languages, feature mandatory type-checking of a painfully severe and tiresome nature.

main static void final class LoquaciousBoilerplate extends Whatthefuckever {
  public final overspecified Whatnot whatnot;

Technically, this does enforce composition at a very specific level, but it's much too specific. Building a system like Rails in a language like that would have been close to impossible. This Java/C/C++/ALGOL legacy of mandatory type-checking, for every single variable, just plain sucks.

The only mandatory elements of Rails's type-checking are the classes you can assign to association methods, and the "files must define classes with matching names" constraint which the infamous "expected foo.rb to define Foo" error represents. You see it less these days, thanks to Bundler, but any experienced Rails developer has seen that error countless times, even though very few have ever seen it happen because foo.rb actually failed to define Foo. That almost never happens.

Instead, the error-throwing code is lodged inside a module method for loading missing constants. So you see it whenever a constant is missing; it usually means you forgot to require some utterly unrelated file. The circumstances which throw the error have very little to do with the error message. I've only ever worked with one other language which had this constraint; it was Java, and it never threw an error like this.

But it's easy to explain this error's frequency, and its well-deserved universal reputation for total aggravating worthlessness. You're bound to end up with fragile, buggy code if you're building type-checking into the process of file-loading without modifying File or any I/O-related classes. It's almost like an ad hoc, informally-specified, bug-ridden implementation of half of Java.

Moving the type-checking / file-loading code out of Rails and re-implementing it at the language level (in Rubinius, for instance) would make for a much cleaner design, but wouldn't be practical. By building an incomplete but distinct language on top of Ruby, Rails makes it easy to use any arbitrary Ruby gem, instead of only being able to use gems built specifically for use with Rails.

The Dart language says it has an optional type-checking system:

Map<String, dynamic> m = {
  'one': new Partridge(),
  'two': new TurtleDove(),
  'twelve': new Drummer()};

This Map can then return a Drummer for 'twelve' or a Partridge for 'one'. It's probably more accurate to call this a statically-typed language with a special annotation (dynamic) which allows you to go into dynamically-typed mode. It's opt-out where Rails validations are opt-in.

It's my hope that the inconsistent type-checking in Rails becomes consistent, and that the do_not_want gem becomes totally obsolete. I also think that future language designers should consider implementing non-database versions of belongs_to and similar methods as OOP fundamentals.



You can have some of this in Perl if you use Moose.