Sunday, February 21, 2016

How Computers Feel Different Now

I learned how to program a computer on a TRS-80, in BASIC. I was six years old. At the time, "computers" meant things like the TRS-80. Today, your phone is a computer, your TV's a computer, your car's made of computers, and, if you want, your frying pan can be a computer.

But it's not just that everything's a computer now; it's also that everything's on a network. Software is eating the world not just because of Moore's Law, but also because of Metcalfe's Law. In practice, "software is eating the world" means software is transforming the world. It might make sense to assume that software, as it transforms the world, must be making the world more organized in the process.

But if Moore's Law is Athena, a pure force of reason, Metcalfe's Law is Eris, a pure force of chaos. Firstly, consider the fallacies of distributed computing:
  • The network is reliable.
  • Latency is zero.
  • Bandwidth is infinite.
  • The network is secure.
  • Topology doesn't change.
  • There is one administrator.
  • Transport cost is zero.
  • The network is homogeneous.
The first and the last — "The network is reliable" and "The network is homogeneous" — are basically equivalent to saying "chaos reigns supreme." No place on the network is ever quite like another, because the network is not homogeneous (and the topology is ever-changing), and things don't always happen the same way they happened before, because the network isn't always there. So chaos reigns over both space (the non-homogeneous network) and time (the ever-changing network which is only sometimes there).
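Anyone who has written code against a real network has internalized the first fallacy the hard way. Here's a minimal Python sketch (the flaky endpoint, its failure rate, and the URL are all made up for illustration) of what "the network is reliable" costs you in practice: every remote call turns into a retry loop.

```python
import random

def flaky_fetch(url, fail_rate=0.5, rng=random.random):
    """Simulates a request over an unreliable network."""
    if rng() < fail_rate:
        raise ConnectionError(f"network dropped request to {url}")
    return f"payload from {url}"

def fetch_with_retry(url, attempts=5, rng=random.random):
    """The code you have to write once you stop assuming
    'the network is reliable.'"""
    last_error = None
    for _ in range(attempts):
        try:
            return flaky_fetch(url, rng=rng)
        except ConnectionError as err:
            last_error = err  # a real client would also back off here
    raise last_error

# Seeded so the example is reproducible.
fetch_with_retry("example.com/feed", rng=random.Random(0).random)
```

Even this sketch glosses over the hard parts: real retry logic also needs backoff, timeouts, and some answer to "what if the request succeeded but the response was lost?"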

Chaos also reigns in a social sense: the network isn't secure, and there are many administrators. So if Moore's Law makes everything it touches more automatic and organized, Metcalfe's Law makes everything it touches less reliable and more unpredictable. An unspoken assumption you can see everywhere is that "software is eating the world" means that the world is becoming more organized along the way. But since networking is implicit in the definition of software today, every time software makes the world more organized, it brings networking along with it, and networking makes everything more chaotic.

Everything that software eats becomes newly organized and newly chaotic. Because a new form of organization replaces an old form of organization while a new form of chaos replaces an old form of chaos, it's impossible to determine whether software, when it eats the world, makes it more organized or more chaotic. The net effect is impossible to measure. You might as well assume that they balance perfectly, and that Moore's Law and Metcalfe's Law are yin and yang.

But the thing is, when personal computers were a new idea, they emanated order. You typed explicit commands; if you got the command perfectly right, you got what you wanted, and it was the same thing, every time. They didn't have the delays that you get when you communicate with a database, let alone another computer on an unreliable and sometimes absent network. They didn't even have the conceptual ambiguity that comes with exploring a GUI for the first time.

Even the video games back then were mostly deterministic. It's why big design up front looks so insane to developers today, but made sense to smart people at the time. During WWII, the cryptographers who developed computing itself were mathematicians who based everything about computing on rock-solid, Newtonian certainties. You did big design up front because everything was made of logic, and logic is eternal.

This is no longer the case, and this will never be the case again. And this is what feels different about computers in 2016. A few decades ago, "non-deterministic computer" was a contradiction in terms. Today, "non-deterministic computer" is a perfect definition for your iPhone. Everything it does depends on the network — which may or may not exist, at any given time — and you can only use it by figuring out a graphical, haptic interface which might be completely different tomorrow.

Every Netflix client I have operates like a non-deterministic computer. Here's a very "old man yells at cloud" rant, but it actually happened. I open Netflix and start watching a show. There's some weird network glitch, and my Apple TV restarts. I open Netflix a second time and go to "previously watched," but the Apple TV didn't tell the network in time, so Netflix doesn't know I was watching this show. So I manually search for it, and when I hit the button to watch it, Netflix offers me the option of resuming playback where I was before. So it knows I was watching it, now.

Basically, whatever computer cached the list of previously watched shows was out of sync with the computer that cached the playback positions.
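This failure mode is ordinary eventual consistency: two caches of the same backend state, synced at different moments, give contradictory answers about the same user. A minimal sketch (all names and numbers hypothetical, not Netflix's actual architecture):

```python
class Cache:
    """A cache that holds a snapshot of backend state as of its last sync."""
    def __init__(self):
        self.data = {}

    def sync(self, backend_state):
        self.data = dict(backend_state)

# The backend's actual record of the user.
backend = {
    "watching": ["Some Cartoon"],
    "position": {"Some Cartoon": 1310},  # seconds into the episode
}

history_cache = Cache()   # backs the "previously watched" list
position_cache = Cache()  # backs the "resume playback?" prompt

# Only the position cache synced after the latest write, so two views
# of the same user now disagree with each other.
position_cache.sync(backend)

history_cache.data.get("watching", [])  # empty: the show isn't listed
position_cache.data["position"]         # knows exactly where you stopped
```

A single computer with a single copy of the state can't disagree with itself; a network of computers, each holding a differently stale copy, disagrees with itself all the time.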

A few decades ago, it was impossible for a computer to have this problem, where the right hand doesn't know what the left hand is doing. Today, it's inherent to computers. And this has long-term consequences which are subtle but deep. Kids who see chaos as an intrinsic element of computing from the moment they're old enough to watch cartoons on Netflix are not going to build the same utopian fantasies that you get from Baby Boomers like Ray Kurzweil. My opinion of transhumanists is that they formed an unbreakable association between order and computers back when networks weren't part of the picture, and they never updated their worldview to integrate the fundamental anarchy of networks.

I don't want to go too far into "old man yells at cloud" territory here. That's where you get these annoying rants where people think the iPad is going to prevent kids from ever discovering programming, as if Minecraft were not programming. And I'm already telling you that the kids see the world a different way, like I'm Morley Winograd, millennial expert. But there's a deep and fundamental generation gap here. Software used to mean order, and now it just means a specific blend of order and chaos.

Wednesday, February 10, 2016

Theory: In Fiction, Curiosity Is Equal With Conflict

You've probably seen this talk, from a few years ago:

I've come to the conclusion that curiosity is as important as conflict in storytelling.

First, consider genre fiction. What would the British murder mystery be, without curiosity? Or consider what William Gibson said:
I wanted the reader to feel constantly somewhat disoriented and in a foreign place, because I assumed that to be the highest pleasure in reading stories set in imaginary futures.
Mysteries run on the "whodunnit" question. Science fiction runs on a more ambient curiosity, diffused to the setting rather than localized in a very specific piece of the plot. You're constantly trying to find out how this future setting differs from your relatively mundane reality. Curiosity drives horror fiction as well; imagine a horror story which started out like this:
There's a very specific type of monster that a lot of people don't know about. It's invulnerable to bullets, so shooting at it won't help you, but it's vulnerable to fire, so if you set it on fire, you'll be fine. It's nocturnal, so you might not be able to tell how big it is when you see it; fortunately, we can tell you that it's about eight feet tall, but only weighs about a hundred pounds. It attacks seemingly random individuals on a seemingly random schedule. However, there's a simple principle which allows you to predict whom it will attack, and when.
That would not be an effective horror story. It's more like an animal control manual. Every time you get the facts, the monster gets less scary. When the attacks seem random, that's terrifying. When you can call them ahead of time, they're not. This fundamental fact is the reason why horror video games can degrade into action video games which merely have unsettling artwork: once you understand the monster's mechanics, it's less of a monster, and more just an ugly problem.

The way horror uses curiosity sits in the middle between the very diffuse way sci-fi uses curiosity, and the very concentrated way mystery uses it. With mystery, you want an exact piece of the plot. With sci-fi, you want the world around the story. And with horror, you never learn enough to imagine solving the problem until the characters are trapped somewhere the solution is out of reach. But all three of these genres require unanswered questions to operate.

Everything I've ever read on narrative has said that conflict is essential. I've never seen anything which acknowledged the role of curiosity, and never any mention of the balancing act between revealing too little and revealing too much.

The thing that made me absolutely certain that curiosity is as fundamental and essential as conflict was the television adaptation of The Expanse. As an avid fan of the books, I enjoyed the first few episodes despite their many flaws, but grew more and more frustrated with the show's inferiority to the books. I re-read the first two books just to get the taste of the show out of my mouth, and then I began re-reading the first book again.

This time, I've set up a spreadsheet and I'm filling it out chapter by chapter. The spreadsheet tracks what questions are raised in each chapter, what questions are answered, and — perhaps most importantly — what question each chapter ends on. Because in my re-reading, I noticed that "end on a question" seems to be a core organizing principle in these books. Most chapters end on cliffhangers, and a chapter which doesn't end on a cliffhanger will still at least end on a question.

There's also a column in my spreadsheet for "box within a box," because — to use JJ Abrams's term — The Expanse series of novels doesn't just have you constantly wondering "what's in the box?" Nearly every time you find out, what you find inside the box is another box. And you usually find that box-within-a-box right at the end of a chapter. The books switch protagonists on a chapter-by-chapter basis, and every chapter opens by addressing some of the questions raised in the last chapter which "starred" that particular protagonist. Chapters also typically answer a previous question, then raise a low-stakes question, and then open up several new questions, amping up the stakes until they get to a cliffhanger, at which point the chapter ends.

It's a very addictive experience, and it's a cycle which continues throughout the book. The Expanse novels use these Matryoshka stacks of boxes within boxes as a propulsion mechanism, driving you from the end of one chapter into the beginning of the next, making these books extremely difficult to put down. Typically, when a new Expanse novel comes out, I read the whole thing in less than a day, putting aside just about everything else in my life.

I don't write as much fiction as I'd like, so I probably won't have time to apply this insight until 2017. But whatever I write next is going to steal a simple rule from The Expanse: end every scene on a question.

Is Twitter Optimizing For Users Who Even Exist?

A widely dreaded new Twitter feature became a reality today, but it's optional.
You follow hundreds of people on Twitter — maybe thousands — and when you open Twitter, it can feel like you've missed some of their most important Tweets. Today, we're excited to share a new timeline feature that helps you catch up on the best Tweets from people you follow.

Here's how it works. You flip on the feature in your settings; then when you open Twitter after being away for a while, the Tweets you're most likely to care about will appear at the top of your timeline – still recent and in reverse chronological order.
It's good to see that Twitter's notoriously ever-changing and tone-deaf management is listening, a little, for a change. But there are obviously better things Twitter could be doing with its energy here, and by Twitter's own reasoning, this only solves problems for a subset of its user base:
You follow hundreds of people on Twitter — maybe thousands — and when you open Twitter, it can feel like you've missed some of their most important Tweets.
How big is that subset? Who has this problem?

Let's assume for the sake of argument that "important Tweets" is even a meaningful phrase, and that a tweet which is important can exist. Let's also assume that capitalizing "Tweet" is an honest example of clear writing. (It's obviously a deliberate attempt to stave off trademark genericization, but let's pretend that anyone but Twitter employees, anywhere in the universe, ever capitalizes "tweet.")

Let's give Twitter all this bullshit that they're trying to get away with, and then just ask: does their argument even make sense under its own false assumptions? Who out there is bummed that they missed an important "Tweet" because they follow thousands of people on Twitter?