Monday, November 28, 2016

ERROR: Permission denied for relation

Stack Overflow will have you mess around with GRANT all day long.

But if you're seeing this error, you're probably doing dev on your laptop and you probably don't need to care any further than this:

ALTER USER myuser WITH SUPERUSER;
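(Assuming your laptop has the usual default postgres superuser lying around, that's one line from the shell:

psql -U postgres -c "ALTER USER myuser WITH SUPERUSER;"

Reconnect as myuser and the error's gone. And obviously, don't do this anywhere that isn't a dev laptop; production is what all that GRANT yak-shaving is actually for.)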

Enjoy.

Saturday, October 1, 2016

How Vim Shaped My Writing

Many years ago, when I was a more active blogger, another blogger did a word-frequency analysis of my blog vs other popular blogs in the same niche. They found I had something like three times as many unique words per post as my peers. So they concluded I was a bot, and created a GitHub repo to prove it.



At the time I considered it a compliment to my vocabulary, but what I've realized recently is that it's mostly just a side effect of Vim navigation. By far the most efficient method of getting around in Vim is to search for terms that you know to be unique. I've been using Vim a long time, so I picked up a habit of using as many unique words as possible in my writing.
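(For the non-Vim people: typing / followed by a word jumps you straight to the next occurrence of that word, and n repeats the jump. The rarer the word, the better it works as a navigation target, so you start reflexively seeding your text with rare words.)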

Sunday, September 25, 2016

JavaScript Is Gradually Supervised Chaos

Imagine if, after Google graduated from tiny startup to megacorp, it had been replaced with a well-run public utility. This is a controversial idea in America, but public utilities can be pretty great. The public subway system in London, for instance, is fantastic. Compare that to private utilities like Comcast.

In this alternate universe, there's some kind of system which kicks in after the success of a startup. "Congratulations, you've made billions, you're rich now, but what you've created is so essential to our society that we can't risk it turning into another Comcast, so we're going to run it as a public service which every American gets for free." This would be a very different America.

There's this weird thing in the US where conservatives refuse to believe that the government can do anything well, but also are outraged if you criticize the military or the police. It's as if working for the government is a guarantee of incompetence, but working for the government and carrying a gun is a guarantee not only of competence but also moral superiority. I don't understand it.

But this is just a mental exercise, set in an alternate universe, so humor me. Imagine this alternate-universe America is like Switzerland, where public utilities are run so efficiently that trains arriving three minutes late constitute a political scandal.

So we're in this alternate universe where the US government takes over companies that become so essential to basic everyday life that we can't risk them turning into Comcasts. In this alternate universe, Comcast itself was taken over, and nobody ever has to deal with their shit. It's a different place, but it's a nice place.

The purpose of this mental exercise is to explain JavaScript. Every time a developer from some other language dives into JavaScript, they freak out at the chaos. You can import individual language features from a spec which hasn't even been finalized yet. There are multiple competing package managers, and you use them to install each other. There's more than one require() syntax, yet, strictly speaking, the language itself has no require() at all.

This chaos occurs because JavaScript, which was born in chaos anyway, operates by allowing its open source communities to develop solutions for failings in the language and platform, and then folds some of those solutions up into the language and platform themselves. The browser turned jQuery's selector engine into document.querySelector; ES6 adopted () => from CoffeeScript. This folding-up necessarily lags behind the development of new solutions. And it works well enough that you often get a situation in JavaScript where multiple pretty-good solutions compete with each other: npm vs. Bower, for example, or CommonJS vs. RequireJS (a rivalry which will fade in importance once ES6's import becomes viable, but won't disappear overnight).

Most language communities don't operate this way, but most language communities don't have the enormous size or reach that JavaScript does. Everything runs JavaScript today, and every programmer uses it. Go on a job ads site and search for "JavaScript" — you'll see ads for jobs in every programming language that exists, with a little bullet point that says "oh btw you also need javascript," far more often than you'll see jobs that are about JavaScript. Getting a group as colossal as the JavaScript user base to agree on hypothetical descriptions of their needs, or hypothetical solutions, would be incredibly difficult. Letting this massive "community" splinter into subgroups and compete is a better solution.

I'm not saying it isn't messy. I'm just saying there's a reason.

Thursday, September 22, 2016

Elm Is The New Rails

About 10 or 11 years ago, a friend asked me if I'd heard of Rails. She had a developer building something for her, and he wanted to build it in Rails. She asked me to look into it; she figured anything that could get the job done was fine with her, but she didn't know if Rails could get the job done. So I looked into it, and I was like, "holy shit." Then I told her, "yeah, that can get the job done," and aside from some dabbling in Node and front-end work, and a whole bunch of bootstrapped entrepreneurial whatnot around information products, Rails has pretty much been my career ever since.

That last part's been true for a lot of people. I don't know if Elm will see the same massive success, but when I compare what it felt like to discover Elm vs. what it felt like to discover Rails, I see a lot of similarities. First, both Elm and Rails represent very curated experiences with a very strong emphasis on being enjoyable to use. Second, that curation involved deep thinking about the best way for the framework to do its job, and in each case that thinking arrived at idiosyncratic and (in my opinion) superior conclusions.

Third, they both showed up in very similar environments. Rails had two main sources of competition: an over-ambitious, platform-y technology which was painfully boring and tedious to work with, and an overly simple alternative which encouraged you to write very hacky code once your project grew beyond a certain size. Angular 2 would be the equivalent of J2EE in this analogy, and obviously React, as a view-only technology which intermeshes presentation markup with application logic, would be the PHP.

OK, I apologize. I know Angular intermeshes markup and logic too, and I know both have insane ambitions of being platform-agnostic mega-/meta-frameworks, and indeed, Elm itself has you write your HTML as Elm functions, so it too blurs the line between logic and markup somewhat. But I needed a J2EE and a PHP, and Angular is definitely the over-engineered one.

Only time will tell if these two analogies hold, and indeed I'm a lot more confident in the Angular snark than the React snark. I could be wrong about both. I built a toy recently with React and enjoyed it a lot (but then again, PHP makes a ton of sense for really simple barely-dynamic pages). And certainly, Evan Czaplicki doesn't seem like a DHH to me at all, although I could totally see him as a Matz.

Still, one of the weirdest aspects of Rails's success has been a particular contrast: everybody steals ideas and vocabularies from Rails, but virtually no other project has said, "yes, Rails is right, programmer happiness should be a top priority." You can find migrations, REPLs, code generation, and "REST means CRUD" in every language now, but you won't find "programmer happiness is the top priority" in many other places.

I think this is partly because a lot of the old blog posts where DHH praised and imitated Kathy Sierra have vanished, so nobody remembers that making users happy is about making them awesome, and a framework which makes its users awesome is doing what a framework should do. It could be that the "look at me, I'm awesome!" vibe of early Rails devs was hard to take, so nobody did any serious thinking about it. It might even just be because it's hard for anybody to take DHH seriously in the first place when he has such an egregious straw man problem.

However, one way or another, DHH told everybody that programmer happiness is the secret to Rails's success, and everybody gathered around wondering "what's the secret to Rails's success?" and virtually nobody ever considered the possibility that when DHH said it was programmer happiness, he was telling the truth. And this confused me, and it continued to confuse me for a full decade, and then I found Elm.

I have absolutely no idea if Elm's focus on being fun to use comes from any awareness of Rails at all. But Elm's designed to be fun to use, and, as a consequence, it's fun to use. What a shock! And as it turns out, everything that makes Elm fun to use also makes it a good framework. It has a clean mental model. It doesn't throw mysterious, aggravating runtime errors. Its compiler errors are easy to understand, easy to correct, friendly, and so polite that its British users assume the language itself is British. It's fast as hell in development and production — pull up a Chrome timeline sometime and compare an Elm "hello world" with the festival of document.write that is an Om "hello world" — and it's basically awesome.

I think my new rule of thumb should be that all of my code should be fun to use. It worked for Rails, and it's working for Elm too.

Thursday, September 15, 2016

God What An Awful Directory Name

One aggravation I have with the asset pipeline, and far from the worst one, is that app/assets/javascripts is just a terrible directory name.

First, if you're building anything remotely modern, your JavaScript code isn't really an asset, it's an app.

Second, "JavaScripts" is not appropriate here. These are not scripts written in Java. The only meaning the term "JavaScripts" could ever actually have in the English language is if you are talking about varieties or dialects of JavaScript. For instance, ES6 is a much more usable JavaScript than the JavaScript of 1997, but both these JavaScripts have some really weird edge cases, especially around numbers and numeric comparison.

If the directory were called app/assets/javascript-scripts, that would still be stupid, but it would at least have a meaning.

Unless you are, for some insane reason, storing ES7, ES6, ES5, and/or several other entire dialects of JavaScript in app/assets/javascripts, that plural is just wrong.

Update: and I forgot the most obvious reason!

Wednesday, July 13, 2016

Online Trolling Is A Mainstream Thing

This is a music video about misogynistic online trolls. The album this video comes from hit number 1 on the "alternative" Billboard chart. (Both the song and the album appear to have made it into the top 40, for those of you old enough to remember what that was.)



This is a late-night talk show where the last Republican nominee for President reads "mean tweets" (aka online trolling) including one written by the next Republican nominee.



What a time to be alive.

Tuesday, July 12, 2016

How Robotics Will Transform Hollywood

Before CGI, aka 3D animation and modeling, special effects meant practical effects: building little spaceships out of plywood, putting the camera really close to them to make them seem huge, and then setting them on fire. Or a giant, malfunctioning, mechanical shark. Filmmakers like Christopher Nolan still favor practical effects over CGI, because they give the actors something real to react to.



But 3D and CGI are in every movie, because they give filmmakers the ability to depict just about anything, and they've advanced to a degree where they look incredible. So they're not going anywhere.

But I think practical effects are going to come back in a huge way. Today, you don't have to build the little spaceship out of plywood. You can fill it full of tiny servos and sensors. You can make little walking tanks. You can do almost anything.

Animatronic effects and simple machines have been a part of filmmaking for a long time, but that whole process is probably about to become a lot more effective. So there's probably going to be a little practical effects renaissance in about 5 to 10 years, which will make movies feel more real again. It's probably starting already.

Thursday, June 30, 2016

Noise Engineering Loquelic Iteritas: Rough Exploration Video

I made another Eurorack video. Caveat: I mostly made it in my pajamas, and you have to be logged in to YouTube to see it, because there's swearing. It's a rough tour of a new oscillator I got, namely the Loquelic Iteritas from Noise Engineering.

TL;DR: aggro bass dream machine.

Thursday, June 23, 2016

Two Videos Explaining Eurorack Patches Built Around The Mutable Instruments Elements

I made a couple beginner-friendly videos to explain my own Eurorack patches, partly as a vlogger thing, partly so I would be able to re-create them later myself, and partly to improve my general skill at making Eurorack patches. Kind of like detailed note-taking, and kind of like the idea that teaching is the best way to learn.

The first one is a housey, proggy sound:



The second is a sound I think of as "Typhoon Kitten."



Both videos are mostly built around the Mutable Instruments Elements, a powerful little synthesizer which only sometimes does what I want it to.

Tuesday, June 21, 2016

How I Understand Donald Trump

The best way to understand Donald Trump, in my opinion, is to think about Ron Paul's problems in 2011.



Why did the media smirk whenever Ron Paul tried to take the stage, even though the crowd went wild?

Ever since the Southern Strategy was established, Republicans have been promising their constituencies Trumps and delivering something else instead. There was no way on earth this three-card monte trick was going to last forever.

Yet in 2011, Ron Paul won huge crowds, and when the media spoke of him, they smirked. They didn't see the impending doom of the Southern Strategy. They saw a joke. Why?

Because the media class controlled what could be discussed, and they ruled Paul out. He campaigned without their approval, and they thought that was hilarious. Television had controlled politics since the Nixon-Kennedy debates, and the idea that you could win without media support was ridiculous to the people who ran the media.

Those debates were a pivotal moment in American politics; for the first time, if you wanted to be President, you had to look good on television. But in 2008, Obama built a campaign machine which centered around social media. And today, Trump's success has more to do with social media, especially Twitter, than it has to do with television.

Trump's not a fluke; he's the second Ron Paul in a row. Both these candidates did better with social media than with traditional media, but one came before a major shift in the relative importance of these two categories of media, and the other one came after it. So, while TV and print could smirk about social media in 2011, it's Trump doing the smirking today, because social media now matters more than TV and print.

Or at least, it was Trump doing the smirking a few weeks ago. His campaign's not doing as well against Clinton as it did against other Republicans. I won't go any deeper into that, because I don't want to claim to predict the future.

But if you want to understand Trump, he's a lot less mysterious if you look at him as continuing what Ron Paul began, within his party, and continuing what Obama began, in terms of campaign tactics.

(Whether he knows he's continuing either of those things is another question. I doubt Trump's ego could handle acknowledging the reality that he's picked up on the tactics a black man pioneered, and that he's only able to pick them up today because they've become a lot simpler to use, and more readily available to less educated and less intelligent people like himself. Hell, even putting aside his racism, his ego's probably so fragile that he couldn't even deal with the idea that he's doing something that Ron Paul did first.)

Update: I also like this theory.

Friday, June 10, 2016

Dudes Must Band Together To Prevent Human Cloning

A few times, when I lived in San Francisco, gay dudes checked me out and hit on me when I was walking down the street minding my own damn business.

Recently on Twitter, a rando explained my own joke to me.

Based on these two experiences, I am absolutely certain that the moment science locks down human cloning and makes it a viable, dependable technique, women are going to massacre every dude on earth and kill us all.

If you're a dude, please understand that our only hope of survival is to prevent human cloning.

Tuesday, May 3, 2016

Amazon: The Next (Next Microsoft)?

Tech has this weird, generational semi-imperialism, where a particular company seizes control of the platform everybody else needs and becomes "king" for a while, before fading into relative irrelevance when a new platform emerges. IBM and Microsoft both fit this pattern perfectly. Google and Facebook have arguably been contenders in more recent decades, except neither was really ever essential to developers.

Google search rankings have been crucial to businesses, and Facebook's got a somewhat frightening control over social interactions — and the business implications of that were enough to terrify Google executives into acts of pitiful desperation — but neither Google nor Facebook was ever actually essential to developers. They have both been arguably essential to businesses, but attempts to paint Google or Facebook as hegemonic tyrants in the 1980s/1990s Microsoft style don't really work, in my opinion, because while businesses do have an equivalent level of dependence on these platforms, developers don't.

So look at Amazon in that light. How many startups run on AWS? In 2015, AWS brought in more operating income than Amazon's entire retail operation did, and in 2016, Jeff Bezos expects it to pass $10B in revenue, a milestone he points out AWS reached even faster than Amazon's retail business did.

Right now, being able to run all your infrastructure on Amazon is kind of awesome, although not without challenges. But if the last decade or two have disproved (or at least provided a counterexample to) this idea that tech's history consists mostly of cycles of platform domination, the 2020s might be a strong example in favor of the theory, with Amazon in control.

Tuesday, April 19, 2016

Rails, RSpec, Poems, And Synthesizers

I've been re-watching Gary Bernhardt's classic series of screencasts Destroy All Software, in part because I'm eagerly anticipating the new edition, Destroy All Software: Civil War. In this edition, David Heinemeier Hansson will face down Bruce Wayne, and everybody will have to pick a side. I'm really looking forward to it. I think, also, that Luke turns out to be his own cousin, or something, but a) I think that's just a rumor, and b) if you know, don't tell me, because spoilers.

Anyway, there's a screencast which covers conflicts between the existing "rules" of object-oriented programming, specifically the inherent conflict between Tell Don't Ask and the Single Responsibility Principle. I'm into this topic, because my book Rails As She Is Spoke is mostly about similar conflicts.

One interesting thing that comes up in this screencast, mostly in passing, is that Rails enthusiastically embraces and encourages Law of Demeter violations. In fact, if you build Rails apps, you've probably seen a line of code like this now and then:

@user.friends.logged_in.where(:last_login < 4.days.ago)

This code subordinates the Law of Demeter to the Rule Of Thumb That Rails Code Should Look Like Cool Sentences. Lots of other things in Rails reveal this same prioritization, if you look at them closely. In fact, when Mr. Hansson wrote a blog post about concerns, he explicitly stated it:
It’s true that [avoiding additional query objects in complex model interactions] will lead to a proliferation of methods on some objects, but that has never bothered me. I care about how I interact with my code base through the source.
It's extremely tempting to laugh this off. "Wow, this guy prefers pretty sentences to considering the Law of Demeter, what a n00b." And I am definitely not going to endorse that blog post, or the idea of concerns, across the board. But I also think laughing off DHH's priorities here would be a mistake.

Consider RSpec, for the sake of comparison. RSpec prioritizes a sentence-y style of code, which tries hard to look like English, over just about any other consideration, as far as I can tell. And RSpec has an Uncanny Valley problem. This code has both an Uncanny Valley problem, and a Law of Demeter problem:

@user.should_receive(:foo).with(:bar).and_return(:baz).once

By contrast, it's very interesting that Rails only has Law of Demeter problems when it does the same kind of thing. The Rails valley is not uncanny at all. When Rails tries to make Ruby look like English, it stops a little earlier than RSpec does, acknowledging the fakeness and the Ruby-ness of the "English," and in so doing it gives you code which is English-like enough to be incredibly convenient and easy to read, but not so determined to be English that you can't reason about its API and are forced to memorize everything instead.

Rails encourages specific Demeter violations as a set of special, privileged pathways through unrelated objects and/or objects which exist only to serve as those pathways in the first place. And it works. I'm not saying Rails is perfect — if you've read my book, or indeed ever read anything I've written about Rails since about 2011, then you know I don't think that — but I don't think its cavalier attitude towards the Law of Demeter would even make it onto a top ten list of things I want to change about Rails.
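To make "privileged pathways" concrete, here's a rough sketch of the kind of model setup that makes the chain above possible. The association and scope names are my guesses, lifted from the example, not anything canonical:

class User < ActiveRecord::Base
  has_many :friendships
  has_many :friends, through: :friendships, source: :friend

  # a named scope: a pathway you declared on purpose
  scope :logged_in, -> { where.not(last_login: nil) }
end

class Friendship < ActiveRecord::Base
  belongs_to :user
  belongs_to :friend, class_name: "User"
end

Every link in @user.friends.logged_in is something somebody declared deliberately, which is a big part of why the chain reads so well.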

Of course, the whole point of that screencast I mentioned, which points out that the "rules" of OOP conflict with each other from time to time, is that these rules are not rules at all, but merely guidelines. So it's no surprise that they involve tradeoffs. What is surprising is that I don't think there's any real name for what Rails chooses to prioritize over Demeter, except perhaps "readability."

Frankly, it's moments like this when I feel privileged to have studied the liberal arts in college, and where I feel sorry for programmers who studied computer science instead, because there's no terminology for this in the world of computer science at all. Any vocabulary we could bring to bear on this topic would be from the worlds of literature, poetry, and/or language. I know there's a widespread prejudice against the liberal arts in many corners of the tech industry, where things like literature and poetry are viewed as imprecisely defined, arbitrary, or "made up," but every one of those criticisms applies to the Law of Demeter. It's not really a law. It's just some shit that somebody made up. Give credit to the poets for this much: nobody ever pretended that the formal constraints for haikus or sonnets are anything but arbitrary.

Let's look again at our two lines of example code:

@user.should_receive(:foo).with(:bar).and_return(:baz).once # no
@user.friends.logged_in.where(:last_login < 4.days.ago) # ok


If you were to write one of these lines of code, it would feel like you were writing in English. The other line could function as an English sentence if you changed the punctuation. But what's interesting is that these two statements don't apply to the same line.

This one feels harder to write, yet it functions almost perfectly as English:

@user.should_receive(:foo).with(:bar).and_return(:baz).once
# "user should receive foo, with bar, and return baz, once"

Writing this one feels as natural as writing in English, but falls apart when treated as English:

@user.friends.logged_in.where(:last_login < 4.days.ago)
# "user friends logged in where last login less than four days ago"

These are extremely subjective judgements, and you might not agree with them. Maybe the RSpec code isn't such a good example. What I find difficult about RSpec is remembering fiddly differences like should_receive vs. should have_selector. I'm never sure when RSpec's should wants a space and when it wants an underscore. Why is it should have_selector, and not should_have_selector? Why is it should_receive, and not should receive? RSpec has two ways to connect should to a verb, and there doesn't seem to be any consistent reason for choosing one or the other.
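In case you've mercifully blocked this era out, here's the kind of pair I mean, side by side (page being the usual Capybara session object, and foo just a placeholder):

@user.should_receive(:foo)       # message expectation: should plus an underscore
page.should have_selector("h1")  # matcher: should plus a space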

In actual English grammar, there are consistent connections between words, whereas with RSpec, you kind of just have to remember which of several possible linkage principles the API chose to use at any given moment. To be fair, English is a notoriously idiosyncratic language full of inconsistencies and corner cases, so writing RSpec might actually feel like writing English if English isn't your first language. But English is my first language, so for me, writing RSpec brings forth a disoriented sensation that writing English does not.

(Tangent: because I'm a first-generation American, and England is "the old country" for me, English is not only my first language, but my second language as well.)

Anyway, the question of why Rails feels more natural to me than RSpec — and I really think it's not just me, but quite a few people — remains unanswered.

There is another way to approach this. This is an analog synthesizer:



These machines have a kind of control called an envelope.



Briefly, an envelope is a way to control an aspect of the synthesizer. This synth has one envelope for controlling its filter, and another for controlling its amp (or volume). It doesn't matter right now what filters and amps are, just that there are two dedicated banks of sliders for controlling them. Likewise, it doesn't matter exactly how envelopes work, but you should understand that each envelope has four controls: Attack, Decay, Sustain, and Release.

Now look at this synthesizer:



The envelope controls are much more compact:



This synthesizer again has one envelope for its filter, and another for its amp. But this synthesizer wants you to use the same single knob not only for each envelope, but even for each of the four parameters on each envelope. Where the other machine had eight sliders, this machine has one knob. You press the Attack button in the Filter row to make the knob control the filter attack. You press the Release button in the Volume row (as pictured) to make the knob control the amp release. (And so on.)

Do hardware engineers have a word for this? If they do, my bad, because I don't know what it would be. User experience designers have a related word — affordances, which is what all these controls are — but I don't know if they have a word for when you dedicate affordances on a per-function basis, vs. when you decide to double up (or octuple up). It is, again, a tradeoff, and as far as I can tell, it's a tradeoff without a name.
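If it helps to see that tradeoff somewhere other than a hardware panel, here's a toy sketch in Ruby. This isn't any real synth's API; it's just two made-up classes, one per side of the tradeoff:

# Dedicated affordances: one setter per parameter, like eight sliders.
class SliderEnvelope
  attr_accessor :attack, :decay, :sustain, :release
end

# Multiplexed affordance: one knob, plus a selector deciding what the knob means.
class KnobEnvelope
  PARAMS = [:attack, :decay, :sustain, :release]

  def initialize
    @values = Hash.new(0)
    @selected = :attack
  end

  def select(param)
    raise ArgumentError, "unknown parameter: #{param}" unless PARAMS.include?(param)
    @selected = param
  end

  def knob=(value)
    @values[@selected] = value
  end

  def [](param)
    @values[param]
  end
end

env = KnobEnvelope.new
env.select(:release)
env.knob = 90 # the same physical gesture now means "release time"

The SliderEnvelope costs you panel space (or API surface); the KnobEnvelope costs you a mode you have to keep track of in your head.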

But it's the same basic tradeoff that Rails and RSpec make when they pretend to be the English language, and somehow, Rails gets this tradeoff right, while RSpec gets it wrong. When I need to recall a Rails API which mimics English, it's easy; when I need to recall an RSpec API which mimics English, there's a greater risk of stumbling. With should_receive vs. should have_selector, the relationship between the API's affordances and its actions is out of balance. RSpec here has the opposite problem from the synthesizer with one knob for every envelope parameter. Here, RSpec's got an extra affordance — using an underscore vs. using a space — which has no semantic significance, but still takes up developer attention. It's a control that does nothing, but which you have to set correctly in order for the other controls to not suddenly break. Rails, by contrast, has a nice balance between affordances and actions in its APIs.

Sunday, April 10, 2016

The Fallacies Of Distributed Coding

If you only ever write code which runs on one machine, and only ever use apps which have no networked features, then computers are deterministic things. It used to be a given, for all programmers, that computers were fundamentally deterministic, and thanks to the internet, that just isn't true any more. But it's not just the rise of the internet, with its implicit mandate that all software must become networked software, which has killed the idea that programming is inherently deterministic. Because everybody's code became a distributed system in a second way.

If you write Ruby, your code is only secure if RubyGems.org is secure. If you write Node.js, your code is only secure if npmjs.com is secure. And for the vast majority of new projects today, your code is only secure if git and GitHub are secure.

Today "your" code is a web of libraries and frameworks. All of them change on their own schedules. They have different authors, different philosophies, different background assumptions. And all the fallacies of distributed computing prove equally false when you're building applications out of extremely modular components.
  1. The network is reliable. This is obviously a fallacy with actual networks of computers, but "social coding," as GitHub calls it, requires a social network, with people co-operating with each other and getting stuff done. This network mostly exists, but is prone to random outages.
  2. Latency is zero. The analogy here is with the latency between the time you submit a patch and the moment it gets accepted or rejected. If you've ever worked against a custom, in-house fork of a BDD library whose name.should(remain :unmentioned), because version 1.11 had a bug, which version 1.12 fixed, but version 1.12 simultaneously introduced a new bug, and your patches to fix that new bug were on hold until version 1.13, then you've seen this latency in action, and paid the price.
  3. Bandwidth is infinite.
  4. The network is secure. Say you're a law enforcement agency with a paradoxical but consistent history of criminality and espionage against your own citizens. Say you try to get a backdoor installed on a popular open source package through legal means. Say you fail. What's to stop you from obtaining leverage over a well-respected open source programmer by discovering their extramarital affairs? I've already given you simpler examples of the network being insecure, a few paragraphs above. I'm hoping this more speculative one is purely hypothetical, but you never know.
  5. Topology doesn't change.
  6. There is one administrator.
  7. Transport cost is zero. Receiving new code updates, and integrating them, requires developer time.
  8. The network is homogeneous.
Open source has scaled in ways which its advocates did not foresee. I was a minor open source fan in the late 1990s, when the term first took hold. I used Apache and CPAN. I even tried to publish some Perl code, but I was a newbie, unsure of my own code, and the barriers to entry were much higher at the time. Publishing open source in the late 1990s was a sign of an expert. Today, all you have to do is click a button.

The effect of this was to transform what it meant to write code. It used to be about structuring logic. Today it's about building an abstract distributed system of loosely affiliated libraries, frameworks, and/or modules in order to create a concrete distributed system out of computers sending messages to each other. The concrete distributed system is the easy part, and even so, people get it wrong all the time. The abstract distributed system is an unforeseen consequence of the incredible proliferation of open source, combined with the fact that scaling is fundamentally transformative.

Wednesday, March 23, 2016

Reddit & Hacker News: Be A Non-User User

When Ellen Pao was forced out of Reddit by a horde of angry misogynists, I deleted all my Reddit accounts. But I ended up going back to Reddit for its /r/synthesizers subreddit. I've been making music with synths my whole life, but last summer, I taught a class on it, so I wanted to do some extra research.

Soon after, I discovered /r/relationships, where so many people spend so much of their time talking young women out of relationships with older abusive men that the subreddit might as well be called /r/abusepreventionvolunteerstargetingaveryspecificageprofile (except for all I know that might exist too). They help abused men get out of danger, too, and you do see the occasional abusive relationship where both parties are roughly the same age, but for some reason, nine times out of ten, it's a naive 23-year-old woman dating an abusive 37-year-old man. There's a colossal irony in this: a site which is famously overrun with misogynists also hosts a fantastic resource for abused women.

At first I read this subreddit as a guilty pleasure, thinking that nothing could be more hilarious than the type of idiot who looks to Reddit for relationship advice. But when I discovered this theme, I realized this subreddit was doing a good thing. It's a force for good in the world, or whatever.

So I read these subreddits occasionally, and others, without ever logging in. I can't log in, because I deleted my accounts, but I'm glad I did, because reading Reddit without logging in is much, much more pleasant than being a "user" of the site in the official sense. The same is true of Hacker News; I don't get a lot out of logging in and "participating" in the way that is officially expected and encouraged. Like most people who use Hacker News, I prefer to glance at the list of stories without logging in, briefly view one or two links, and then snark about it on Twitter.

Let's actually compare these use cases from the perspective of behavioral economics. The upvote/downvote dynamic is one which incentivizes groupthink and long conversations. So if you go on Reddit or Hacker News, you see groupthink and long conversations. Twitter's prone to hate mobs and abuse, but if you're just doing a brief bit of snark on there about some random link from Hacker News, you're probably experiencing Twitter's happy path, which encourages snippets of decontextualized wit. The decontextualization turns out to be incredibly important. Decontextualization is why snarking about Hacker News on Twitter is a better user experience for discussing Hacker News stories than logging into Hacker News.

On Hacker News, if you say something people hate, they can downvote it, and if you say something they like, they'll upvote it. When you see your own post, it's ranked in a hierarchy next to other stuff people said, which is also ranked in that same hierarchy. On Twitter, you can get retweeted or starred/heart-ed, but tweets kind of just float randomly through time. You can find out if people love your tweet or hate it, but you don't get direct comparison to other remarks on the same topic, which is great. That direct comparison is a terrible feature. On a site which incentivizes groupthink, if you're the top post, you're almost guaranteed not to have the best insight. Good insights don't survive groupthink.

If you do have the best insight, you can calculate the stupidity of the group as a whole by measuring how far your post is from the top. But it's rarely so linear. Usually, a Hacker News thread will have a ton of bullshit, plus some good insights here and there. The best insights are usually about midway through the hierarchy, or near the bottom half of the middle section, which says to me that the audience as a whole is more stupid than smart, but also often contains smart people.

Everything I'm saying about Hacker News is true for Reddit in theory as well. You can definitely see it on programming subreddits all the time. In fact, programming subreddits are even worse than Hacker News. But Reddit serves a much larger and more diverse audience, and some subreddits (as I mentioned above) do remarkably good things with it, despite its fundamental design flaws. The audience is an important factor; Lobsters gets a lot of mileage out of being nothing but an HN clone with fewer idiots.

Long story short, the best way to use these sites is to never log in. And if you're building social software, you should really think about this.

First, your sites don't exist in isolation. In the prehistoric epoch when Hacker News was created, there was this idea that sites were self-contained universes. Today we know that a lot of the people who talk to each other on Hacker News are talking to each other on Twitter as well. So you have to keep in mind that even when people want to talk about the stuff they find on your site, it's unrealistic to assume that your site would be the only place they'd go to talk about it. And if they have to create a login, when they already have logins on several other social networks, then a person with a login isn't just a "user": they're someone who, in addition to finding reasons to use the site, also found a reason to log in.

Second, neither of these sites seems to acknowledge that using the sites without being a "user" is a use case which exists. In the terms of both sites, a "user" seems to be somebody who has a username and has logged in. But in my opinion, both of these sites are more useful if you're not a "user." So the use case where you're not a "user" is the optimal use case. Conceptual blind spots often birth silly paradoxes.

Tuesday, March 22, 2016

An Unsolvable Math Problem Inherent To Space Vinyl

One thing science fiction authors often fail to consider is metabolic differences in perception of time. There is a subset of human musical creations which are very similar to the music of birds, just much slower. That same subset is also very similar to the music of whales, just much faster. Birds are tiny creatures with extremely rapid heart rates and their songs are incredibly fast; whales are giants with slower heart rates and their songs are incredibly slow.

When you look at three different types of animals — ourselves, whales, and birds — that all discovered melody, you can see a universality there, but it's actually easier to see it than to hear it. If you transcribe whalesong or birdsong into Western musical notation, its similarity to various forms of human music becomes easier to discern.

The most obvious science fiction consideration here came and went, without anybody handling it reasonably, about forty years ago. We sent a metal record, like a vinyl record but made out of metal, out into space, in the hopes that any creature which found it one day might be able to learn of human music. We included an instruction manual, but even so, what did we know about how an alien who found the record would perceive music which was optimized for our physical size, our heart rate? If an alien similar to a whale finds this music, they will think it incomprehensibly fast; if an alien similar to a bird finds it, they will experience it as bizarrely slow.

Or to be even more obvious, consider that human music is necessarily optimized for the range of audio frequencies that human ears can perceive. Even age differences among humans are enough to cause different perception capacities here. There are frequencies which teenagers can hear, but adults can't, and convenience stores sometimes use this fact against teenagers. They'll play loud sounds in those frequencies to annoy the teenagers without adults even being aware of it, to stop the teenagers from treating their stores as places to hang out. Obviously, if aliens one day discover this metal record floating in space, and they figure out how to build the record player from our oddly IKEA-like diagrams, there's no guarantee that their ears will be able to perceive the same range of frequencies, or indeed that they will even have ears in the first place.

There's a type of extremely complex and multi-dimensional ratio here which we have no way to estimate ahead of time; the ratio of the speed at which sound changes over time in a given piece of music, vs. the speed at which sound normally changes over time, in the perception of listeners of a particular species.

So my big hope for that space record is that, thousands of years ago, aliens abducted a small number of us and transplanted us to a new planet just to see what would happen; that those human beings thrived on this other planet, with no knowledge of our own planet except a few half-forgotten legends; and that these "alien" humans will be the ones to discover that record. Not only is it the most likely way to expect that the music on there will ever be truly heard, it would also be a very weird and exciting experience for the people who found it.

Sunday, February 21, 2016

How Computers Feel Different Now

I learned how to program a computer on a TRS-80, in BASIC. I was six years old. At the time, "computers" meant things like the TRS-80. Today, your phone is a computer, your TV's a computer, your car's made of computers, and, if you want, your frying pan can be a computer.

But it's not just that everything's a computer now; it's also that everything's on a network. Software isn't just eating the world because of Moore's Law, but also because of Metcalfe's Law. In practice, "software is eating the world" means software is transforming the world. It might make sense to assume that software, as it transforms the world, must be making the world more organized in the process.

But if Moore's Law is Athena, a pure force of reason, Metcalfe's Law is Eris, a pure force of chaos. Firstly, consider the fallacies of distributed computing:
  • The network is reliable.
  • Latency is zero.
  • Bandwidth is infinite.
  • The network is secure.
  • Topology doesn't change.
  • There is one administrator.
  • Transport cost is zero.
  • The network is homogeneous.
The first and the last — "The network is reliable" and "The network is homogeneous" — are, as fallacies, basically equivalent to saying "chaos reigns supreme." No area is ever the same, because the network is not homogeneous (and the topology is ever-changing), and things don't always happen the same way they happened before, because the network isn't always there. So chaos reigns over both space (the non-homogeneous network) and time (the ever-changing network which is only sometimes there).

Chaos also reigns in a social sense: the network isn't secure, and there are many administrators. So if Moore's Law makes everything it touches more automatic and organized, Metcalfe's Law makes everything it touches less reliable and more unpredictable. An unspoken assumption you can see everywhere is that "software is eating the world" means that the world is becoming more organized along the way. But since networking is an implicit fundamental in the definition of software today, every time software makes the world more organized, it brings networking along with it, and networking makes everything more chaotic.

Everything that software eats becomes newly organized and newly chaotic. Because you have a new form of organization replacing an old form of organization, while a new form of chaos replaces an old form of chaos, it's impossible to really determine whether or not software, when it eats the world, makes it more organized or more chaotic. The net effect is impossible to measure. You might as well assume that they balance perfectly, and Moore's Law and Metcalfe's Law are yin and yang.

But the thing is, when personal computers were a new idea, they emanated order. You typed explicit commands; if you got the command perfectly right, you got what you wanted, and it was the same thing, every time. They didn't have the delays that you get when you communicate with a database, let alone another computer on an unreliable and sometimes absent network. They didn't even have the conceptual ambiguity that comes with exploring a GUI for the first time.

Even the video games back then were mostly deterministic. It's why big design up front looks so insane to developers today, but made sense to smart people at the time. During WWII, the cryptographers who developed computing itself were mathematicians who based everything about computing on rock-solid, Newtonian certainties. You did big design up front because everything was made of logic, and logic is eternal.

This is no longer the case, and this will never be the case again. And this is what feels different about computers in 2016. A few decades ago, "non-deterministic computer" was a contradiction in terms. Today, "non-deterministic computer" is a perfect definition for your iPhone. Everything it does depends on the network — which may or may not exist, at any given time — and you can only use it by figuring out a graphical, haptic interface which might be completely different tomorrow.

Every Netflix client I have operates like a non-deterministic computer. Here's a very "old man yells at cloud" rant. This happened. I go on Netflix, and I start watching a show. There's some weird network glitch or something, and my Apple TV restarts. I go on Netflix a second time, and I go to "previously watched," but the Apple TV didn't tell the network in time, so Netflix doesn't know I was watching this show. So I go manually search for it, and when I hit the button to watch it, Netflix offers me the option of resuming playback where I was before. So it knows I was watching it, now.

Basically, whatever computer cached the list of previously watched shows didn't know the same thing that the computer which cached the playback times did know.

A few decades ago, it was impossible for a computer to have this problem, where the right hand doesn't know what the left hand is doing. Today, it's inherent to computers. And this has long-term consequences which are subtle but deep. Kids who see chaos as an intrinsic element of computing from the moment they're old enough to watch cartoons on Netflix are not going to build the same utopian fantasies that you get from Baby Boomers like Ray Kurzweil. My opinion of transhumanists is that they formed an unbreakable association between order and computers back when networks weren't part of the picture, and they never updated their worldview to integrate the fundamental anarchy of networks.

I don't want to old man yells at cloud too much here. That's where you get these annoying rants where people think the iPad is going to prevent kids from ever discovering programming, as if Minecraft were not programming. And I'm already telling you that the kids see the world a different way, like I'm Morley Winograd, millennial expert. But there's a deep and fundamental generation gap here. Software used to mean order, and now it just means a specific blend of order and chaos.

Wednesday, February 10, 2016

Theory: In Fiction, Curiosity Is Equal With Conflict

You've probably seen this talk, from a few years ago:



I've come to the conclusion that curiosity is as important as conflict in storytelling.

First, consider genre fiction. What would the British murder mystery be, without curiosity? Or consider what William Gibson said:
I wanted the reader to feel constantly somewhat disoriented and in a foreign place, because I assumed that to be the highest pleasure in reading stories set in imaginary futures.
Mysteries run on the "whodunnit" question. Science fiction runs on a more ambient curiosity, diffused to the setting rather than localized in a very specific piece of the plot. You're constantly trying to find out how this future setting differs from your relatively mundane reality. Curiosity drives horror fiction as well; imagine a horror story which started out like this:
There's a very specific type of monster that a lot of people don't know about. It's invulnerable to bullets, so shooting at it won't help you, but it's vulnerable to fire, so if you set it on fire, you'll be fine. It's nocturnal, so you might not be able to tell how big it is when you see it; fortunately, we can tell you that it's about eight feet tall, but only weighs about a hundred pounds. It attacks seemingly random individuals on a seemingly random schedule. However, there's a simple principle which allows you to predict whom it will attack, and when.
That would not be an effective horror story. It's more like an animal control manual. Every time you get the facts, the monster gets less scary. When the attacks seem random, that's terrifying. When you can call them ahead of time, they're not. This fundamental fact is the reason why horror video games can degrade into action video games which merely have unsettling artwork: once you understand the monster's mechanics, it's less of a monster, and more just an ugly problem.

The way horror uses curiosity sits in the middle between the very diffuse way sci-fi uses curiosity, and the very concentrated way mystery uses it. With mystery, you want an exact piece of the plot. With sci-fi, you want the world around the story. And with horror, you never find out enough information that you can imagine solving the problem until the characters are trapped in a situation where they wouldn't have access to the solution. But all three of these genres require unanswered questions to operate.

Everything I've ever read on narrative has said that conflict's essential. I've never seen anything which acknowledged the role of curiosity, or mentioned the balancing act you have to perform between revealing too little and revealing too much.

The thing that made me absolutely certain that curiosity is as fundamental and essential as conflict was the television adaptation of The Expanse. As an avid fan of the books, I enjoyed the first few episodes despite their many flaws, but grew more and more frustrated with the show's inferiority to the books. I re-read the first two books just to get the taste of the show out of my mouth, and then I began re-reading the first book again.

This time, I've set up a spreadsheet and I'm filling it out chapter by chapter. The spreadsheet tracks what questions are raised in each chapter, what questions are answered, and — perhaps most importantly — what question each chapter ends on. Because in my re-reading, I noticed that "end on a question" seems to be a core organizing principle in these books. Most chapters end on cliffhangers, and a chapter which doesn't end on a cliffhanger will still at least end on a question.

There's also a column in my spreadsheet for "box within a box," because — to use JJ Abrams's term — The Expanse series of novels doesn't just have you constantly wondering "what's in the box?" Nearly every time you find out, what's inside the box turns out to be another box, and you usually open it right at the end of a chapter. The books switch protagonists on a chapter-by-chapter basis, and every chapter opens by addressing some of the questions raised in the last chapter which "starred" that particular protagonist. Chapters also typically answer a previous question, then raise a low-stakes question, and then open up several new questions, amping up the stakes until they get to a cliffhanger, at which point the chapter ends.

It's a very addictive experience, and it's a cycle which continues throughout the book. The Expanse novels use these Matryoshka stacks of boxes within boxes as a propulsion mechanism, driving you from the end of one chapter into the beginning of the next, making these books extremely difficult to put down. Typically, when a new Expanse novel comes out, I read the whole thing in less than a day, putting aside just about everything else in my life.

I don't write as much fiction as I'd like, so I probably won't have time to apply this insight until 2017. But whatever I write next is going to steal a simple rule from The Expanse: end every scene on a question.

Is Twitter Optimizing For Users Who Even Exist?

A widely dreaded new Twitter feature became a reality today, but it's optional.
You follow hundreds of people on Twitter — maybe thousands — and when you open Twitter, it can feel like you've missed some of their most important Tweets. Today, we're excited to share a new timeline feature that helps you catch up on the best Tweets from people you follow.

Here's how it works. You flip on the feature in your settings; then when you open Twitter after being away for a while, the Tweets you're most likely to care about will appear at the top of your timeline – still recent and in reverse chronological order.
It's good to see that Twitter's notoriously ever-changing and tone-deaf management is listening, a little, for a change. But there are obviously better things Twitter could be doing with its energy here, and by Twitter's own reasoning, this only solves problems for a subset of its user base:
You follow hundreds of people on Twitter — maybe thousands — and when you open Twitter, it can feel like you've missed some of their most important Tweets.
How big is that subset? Who has this problem?

Let's assume for the sake of argument that "important Tweets" is even a meaningful phrase, that a tweet which is important can exist, and that capitalizing "Tweet" is an honest example of clear writing rather than an obvious attempt to avoid trademark genericization. Let's just pretend that anyone other than a Twitter employee, anywhere in the universe, ever capitalizes "tweet," and that "important Tweets" are a thing which really exist.

Let's give Twitter all this bullshit that they're trying to get away with, and then just ask: does their argument even make sense under its own false assumptions? Who out there is bummed that they missed an important "Tweet" because they follow thousands of people on Twitter?

Friday, January 15, 2016

Twitter, The Invisible Razorblade Tornado

I got into a Twitter argument with somebody today because I tweeted a link to their tweet, with some commentary. Not going to link up any more details, because I don't want to have random Twitter fights. But the commentary was mild, acknowledging a minor point of contention which might have been raised by a subset of my followers. It existed purely to fend off these minor points of contention. It might have done that job; I don't know. I do know that the person who posted the original tweet took it as an attack of some kind, and responded furiously with I think four new tweets going into incredible detail about how she didn't owe anybody a list of caveats and exceptions in the context of a 140-character microblogging format. While this assertion was true, it also seemed batshit insane.

Here's the thing.

This person was a black woman, and (as you already know) she was posting opinions on the Internet. I've never been a black woman posting opinions on the Internet myself, but everything I've ever read on the subject, by those who have had that experience, strongly suggests that being a black woman posting opinions on the Internet means you encounter ferocious, hateful criticism with literally every tweet you make. So even though this woman's response seemed batshit insane to me, in the context of how she interacted with my tweets, it was probably a completely reasonable misunderstanding on her end. It was a batshit insane way to interact with me, in my opinion, but I very strongly suspect that it was a completely reasonable way for her to interact with her timeline.

There are two things to think about here. The first is that, when you build social software, you're building proxy objects that people interact with instead of interacting with human beings. The second is that Twitter's failure to fully consider the consequences of this fact has led Twitter to become an automatic gaslighting machine. Step one, you're subjected to ferocious hatred. Step two, you encounter a very mild point of disagreement. Step three, in context, this mild disagreement looks like more ferocious hatred, so you respond, quite reasonably, with fierce defiance. Step four, the person who mildly disagreed is now receiving a wildly disproportionate degree of fierce defiance for no readily apparent reason. So they decide that you're fucking nuts. Step five, they tell you you're fucking nuts.

Boom. You have now been gaslighted by a completely sincere and previously disinterested individual who, up to the time of the unintentional gaslighting, harbored no ill intention towards you whatsoever. And this cycle repeats all the fucking time. In this way, if you're subject to harassment on Twitter, Twitter's terrible lack of insight into its own social affordances automatically converts random disinterested people into a crowd of gaslighters.

So.

If you encounter this kind of seeming paranoia on Twitter, please keep in mind, you may be communicating with a completely reasonable person who is trapped inside an invisible tornado of razorblades (with apologies to Adam Wiggins, who used to have a blog with the same name, and who I'm stealing a phrase from). Obviously, this is a story of how I failed to resist the incentives that drive this terrible automatic gaslighting machine, and became part of the problem, rather than part of the solution. But I hope it can serve as some kind of mea culpa, and some kind of warning or cautionary tale, both for anybody else on Twitter, and anybody else in the business of creating software. The past few years have really demonstrated that failing to think through the social affordances of a platform, and failing to listen to your users when they report unintended side effects, can have absolutely terrible consequences.

Tuesday, January 12, 2016

Depression Quest

The award-winning game that sparked a bazillion sea lions is, at least in its web incarnation, a beautiful little experiment, a throwback to the mid-90s, before the dot-com hustle began in earnest - the days of alt.adjective.noun.verb.verb.verb, when the web was spare and tiny, yet filled with bizarre experiments blurring the lines between poetry, fanzines, and hypertext. The thing it reminds me most of is Carl Steadman's placing.com, which was a weird sort of requiem for a failed relationship, in the form of an alphabetical catalog.

It also vaguely reminds me of the small interactive fiction scene, which started with text games like Adventure and Zork, and still continues today with fun little toys like Lost Pig (And Place Under Ground), where you play a dim-witted orc who wanders into a dungeon by accident, or the Machiavellian Varicella, where you play a Venetian palace bureaucrat out to seize control of a kingdom.

It's virtually impossible to escape awareness of the weird festival of hatred and threats which accreted around the main developer of Depression Quest, yet it's actually quite easy for the game itself to sail completely under the radar. This is kind of backwards, to say the least. If you're interested in this kind of thing, it's worth it to play the game for a minute.

It's basically just a Choose Your Own Adventure with some musical accompaniment and some very simple, primitive stats relating to your depression: how severe it is, whether you're taking any medication for it, whether or not you're in therapy, and what effect the therapy is having. The text, on the other hand, is enormous.

At first, I did my best to read every word, and make choices in character. The depression got worse, and the game has a lovely sincerity to it, which, unfortunately, meant that the character's in-game hopelessness started seeping into me, the player, in real life. So I switched strategies, skimmed the text, and made my choices based not on how I felt the character would react, but on what seemed like the right thing to do. My reasoning was, "fuck it, this is going to be negative, might as well get through it feeling good about how I handled it."

That, of course, might actually be the point of the game. Every time I chose the right thing, the depression eased up. Winning the game is actually really easy - do the right thing, even if it seems like it'll be hard or risky for the character.
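
For the programmers reading: here's a minimal sketch, in JavaScript, of the kind of machinery a game like this needs. To be clear, this is not Depression Quest's actual code - the names, numbers, and field structure are all my guesses - it's just meant to show how little engine there is underneath "stats plus pages plus choices."

// A guess at the kind of state the game tracks; not its real code.
var state = {
  severity: 3,           // how bad the depression is right now (0 = fine, 5 = worst)
  onMedication: false,   // whether you're taking any medication for it
  inTherapy: false,      // whether or not you're in therapy
  therapyHelping: false  // what effect the therapy is having
};

// Each page is just prose plus a list of choices; each choice nudges the stats.
// severityDelta, startsTherapy, startsMedication, and nextPage are invented names.
function applyChoice(state, choice) {
  state.severity = Math.min(5, Math.max(0, state.severity + (choice.severityDelta || 0)));
  if (choice.startsTherapy) state.inTherapy = true;
  if (choice.startsMedication) state.onMedication = true;
  return choice.nextPage;
}

// "Do the right thing" choices would carry a negative severityDelta,
// which is more or less the whole win condition described above.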

I wrestled with depression during my teens and early 20s, although I don't know if it ever got as severe as true clinical depression. Maybe it was this memory, maybe it was the writing, maybe it's just the years of acting classes turning me into someone very emotional, but I actually had a hint of tears in my eyes when I got to the end of Depression Quest and won.

Certainly, this game is not for everybody, as a variety of intense overreactions (to say the least) have already very conclusively shown, and on a programming level, all it really consists of is text and links. However, if you like good writing, it's pretty great, in a small and modest way.

Monday, January 4, 2016

Paul Graham Doesn't Write Essays

The noted weenis Paul Graham wrote a pair of blog posts yesterday which have seen celebrated, accurate, and well-deserved rebuttals. But nearly every person who disagreed with Mr. Graham has persisted in indulging the man's pretensions, by referring to his blog posts as essays. Even people who urged Mr. Graham to check his truly towering and gigantic levels of privilege accorded him the privilege of having his blog posts referred to as essays.

He does not write essays. And Mr. Graham has enough privilege. You don't need to afford him even more. Stop fucking doing it.

Paul Graham first caught attention with his writing by publishing a book of what were arguably essays. At least, the book had a bunch of chapters, and no predominant theme, so calling it a book of essays was good enough. In this book, he included a chapter called The Age of the Essay, in which he argued that his style of writing would come to define our age (which I sadly must admit might be true) and further that his chapters were essays (which is questionable). He never published another book of essays, but he later began referring to his subsequent inferior, rambling blog posts as essays as well.

I'm willing to concede that the chapters in his book, Hackers and Painters, were indeed essays. It might be true, and I'm happy to call it close enough. But I first noticed how dishonest Mr. Graham was being, in referring to his blog posts as essays, when I prepared a dramatic reading of his worst writing ever, the blog post Let The Other 95% Of Great Programmers In. This blog post is absolutely not an essay, by Mr. Graham's own definition.

In The Age of the Essay, Graham argues that schools teach you to only write essays about English literature, rather than just about any topic. (I'm very glad to say that this was certainly not true of my education.) He then continues:

The other big difference between a real essay and the things they make you write in school is that a real essay doesn't take a position and then defend it...

Defending a position may be a necessary evil in a legal dispute, but it's not the best way to get at the truth...

The sort of writing that attempts to persuade may be a valid (or at least inevitable) form, but it's historically inaccurate to call it an essay. An essay is something else...

Essayer is the French verb meaning "to try" and an essai is an attempt. An essay is something you write to try to figure something out.

In Let the Other 95% of Great Programmers In, Mr. Graham takes a position and defends it. There is not the slightest hint of exploring a question or trying to figure anything out. He knew what his conclusion would be, and he made an argument. That blog post was a polemic, not an essay.

Note also that a polemic on a blog is usually called a fucking blog post, not a polemic, because most motherfuckers don't even know what the word polemic means.

The two posts he wrote recently, which pissed so many people off, were not essays either. They were very obvious propaganda pieces.

And when somebody writes a propaganda piece on their blog, you might, in a subtle analysis, refer to it as a propaganda piece, but your default term for it should be fucking "blog post."

BECAUSE THAT'S WHAT IT IS.

This person is a BLOGGER. He asserts an undeserved and arrogant level of privilege when he asks you to speak of the things he posts on his blog as essays, rather than blog posts. But that's just being rude, not dishonest. When he blogs polemics and propaganda pieces, asks you to refer to those polemics and propaganda pieces as if they were essays, even when they are not — EVEN BY HIS OWN DEFINITION — and you go along with it, then you are just handing away shit-tons of privilege to somebody who already has far more than enough.

STOP DOING IT.