Wednesday, December 31, 2014
I met James at a Ruby conference, probably in 2008. Later, I stopped going to Ruby conferences, but we stayed in touch via email and text and very occasionally Skype. In 2011, I probably sent more drunk emails and/or texts to James than to any other person. I'm not 100% sure; I don't have precise statistics on this, for obvious reasons, but I hope it paints a picture.
I have very specific dietary restrictions that make travel a real pain in the ass for me, but I figured out some workarounds, and last October I went to New York for a Node.js conference. While there, I met up with James for drinks with a few other people from the Ruby world. The next day I dropped by his office because he wanted to show off his showroom. It was pretty awesome. He was stoked about his new job as CTO of Normal Ears, as well as his new apartment, and his relocation to New York in general. With a sometimes cynical sense of humor and a badass attitude, he was kind of like a born New Yorker. Like somebody who had finally found their ideal habitat.
The last thing James ever said to me was that it had been four years since we had last hung out in person, and I shouldn't make it four more. I made a mental note to figure out some excuse to come back to New York in 2015.
I really wish I was at his funeral right now.
Although I've met a ton of really smart people throughout my life, there have been very few that I ever really bothered to listen to, probably owing to my own numerous and severe personality problems. But I listened to James, I think more so than he guessed. After talking to James about jazz, I spent weeks and weeks on the harmonies and melodies in the music I made. After spying on his Twitter conversations about valgrind, I went and learned C. James was the only skeptic on Node.js I ever bothered taking seriously.
He was the best kind of friend: I would always hold myself to higher standards after talking to him.
Honestly, I cannot fucking comprehend his absence. It feels like some insane hoax. And although he was a good friend, he was a light presence in my life. For others, it must be so much worse. Utmost sympathies to his family and his other friends. This was an absolutely terrible loss.
Wednesday, December 17, 2014
Since then, my concept of conference swag has kind of exploded. The swag bag at RobotsConf was insane. In fact, the RobotsConf freebies were already crazy delicious before the conf even began.
About a month before the conference, RobotsConf sent a sort of "care package" with a Spark Core, a Spark-branded Moleskine notebook (or possibly just a Spark-branded Moleskine-style notebook?), and a RobotsConf sticker.
At the conference, the swag bag contained a Pebble watch, an Electric Imp, an ARDX kit (Arduino, breadboard, speaker, dials, buttons, wires, resistors, and LEDs), a SumoBot kit (wheels, servos, body, etc.), a little black toolbox, a Ziplock bag with several AA batteries, and a RobotsConf beanie. There were a ton of stickers, of course, and you could also pick up a bright yellow Johnny Five shirt.
Many people embedded LEDs in the eyes of their RobotsConf hats, but I wasn't willing to risk it. I live in an area with actual snow these days, so I plan to get a lot of practical usefulness out of this hat.
Spark handed out additional free Spark Cores at the conference, so I actually came home with two Spark Cores. This means, in a sense, that I got five new computers out of this: the Pebble, the Arduino in the ARDX kit, the Electric Imp, and both Spark Cores. Really just microcontrollers, but still exciting. And of these five devices, the Pebble can connect to Bluetooth, while the Imp and Spark boards can connect to WiFi.
Likewise, although you couldn't take them home on the plane with you, there were a bunch of 3D printers you could experiment with. All in all, an amazing geeky playground. The only downside is that it presented a tough act to follow for Santa Claus (and/or Hanukkah Harry).
Friday, December 12, 2014
Tuesday, December 9, 2014
First, it was overwhelming, in a good way. Second, there was so much to do that the smart way to approach it is probably the same as the smart way to approach Burning Man: go without a plan the first year, then make all kinds of crazy plans and projects every subsequent year.
The conf was split between a massive hackerspace, a roughly-as-big lecture space, and a small drone space. You could hop between any or all of these rooms, or sit at various tables outside them. The table space was like half hallway, half catering zone. Outside, there were more tables, and people also set up a rocket, a small servo-driven flamethrower (consisting of a Zippo and a can of hairspray), and a kiddie pool for robot boats.
I arrived with no specific plans, and spent most of the first day in lectures, learning the basics of electronics, robotics, and wearable tech. But I also took the time, that first day, to link up a Parrot AR drone with a Leap Motion controller.
Sorry Rubyists - the code on this one was all Node.js. Artoo has support for both the Leap Motion and the Parrot AR, but Node has embraced hardware hacking, where Ruby (except for Artoo) kinda remains stuck in 2010.
I started with this code, from the
With this, I had the drone taking off into the air, wandering around for a bit, doing a backflip (or I think, more accurately, a sideflip), and then returning to land. Then I plugged in the Leap Motion, and, using Leap's Node library, it was very easy to establish basic control over the drone. The control was literally manual, in the classic sense of the term - I was controlling a flying robot with my hand.
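That opening sequence can be sketched like this, assuming felixge's ar-drone npm module and its chainable client API; the timings and the animation name are guesses, not the post's lost original:

```javascript
// Take off, wander, flip, land. The after() calls chain, each one
// scheduling its callback relative to the previous step.
function runFlightPlan(client) {
  client.takeoff();
  client
    .after(5000, function () { this.clockwise(0.5); })            // wander around a bit
    .after(3000, function () { this.animate('flipLeft', 1000); }) // the "sideflip"
    .after(3000, function () { this.stop(); this.land(); });      // return and land
}

// With real hardware, roughly:
//   var arDrone = require('ar-drone');
//   runFlightPlan(arDrone.createClient());
```

Because the flight plan is just a function of the client, you can dry-run it against a stub before risking the actual drone.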
Here's the code:
As you can see, it's very straightforward.
With this code, when you first put your hand over the Leap Motion, the drone takes off. If you hold your hand high above the Leap, the drone ascends; if you lower it, the drone descends. If you take your hand away completely, the drone lands.
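A minimal sketch of that logic, assuming the leapjs and ar-drone npm modules; the height thresholds are made-up round numbers:

```javascript
// Map a Leap frame's hand data to a drone command. palmPosition is
// [x, y, z] in millimeters above the controller; thresholds are guesses.
var HIGH = 250; // hand above this: ascend
var LOW  = 150; // hand below this: descend

function commandFor(flying, hands) {
  if (hands.length === 0) return flying ? 'land' : 'wait';
  if (!flying) return 'takeoff';
  var y = hands[0].palmPosition[1];
  if (y > HIGH) return 'up';
  if (y < LOW)  return 'down';
  return 'hover';
}

// Wiring it to hardware would look roughly like:
//   var Leap = require('leapjs'), arDrone = require('ar-drone');
//   var client = arDrone.createClient(), flying = false;
//   Leap.loop(function (frame) {
//     switch (commandFor(flying, frame.hands)) {
//       case 'takeoff': client.takeoff(); flying = true;  break;
//       case 'land':    client.land();    flying = false; break;
//       case 'up':      client.up(0.3);   break;
//       case 'down':    client.down(0.3); break;
//       case 'hover':   client.stop();    break;
//     }
//   });
```

Keeping the frame-to-command mapping as a pure function means the interesting part is trivial to test without a drone or a Leap plugged in.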
The frame.gestures.forEach bit was frankly just a failure. I got no useful results from the gesture recognition whatsoever. I want to be fair to Leap, though, so I'll just point out that I hacked this whole thing together inside about twenty minutes. (Another caveat, though: those twenty minutes came after about an hour or so of utter puzzlement, which ended when I enabled "track web apps" in the Leap's settings, and I got a bunch of help on this from Andrew Stewart of the Hybrid Group.)
Anyway, I had nothing but easy sailing when it came to the stuff which tracks pointables and obtains their X, Y, and Z co-ordinates. I ran off to a lecture after I got this far, but it would be very easy to add turning based on X position, or forward movement based on Z position. If you read the code, you can probably even imagine how that would look. Also, if I'd had a bit more time, I think I probably could have synced flip animations to gestures.
In fact, I've lost the link, but I believe I saw that a pre-teen girl named Super Awesome Maker Sylvia did some of these things at RobotsConf last year, and a GitHub project already exists for controlling Parrot drones with Leap Motion controllers (it's a Node library, of course). There was a small but clearly thrilled contingent of young kids at RobotsConf, by the way, and it was pretty amazing to see all the stuff they were doing.
Thursday, December 4, 2014
Tuesday, December 2, 2014
For this one, I created the video in Cinema 4D and Adobe After Effects, and I made the music with Ableton Live. Caveat: it looks better on my machine. YouTube's compression has not been as kind as I would have hoped.
The assignment for this one was to create an intro credits sequence for an existing film, and I chose Scott Pilgrim vs. The World. The film's based on a series of graphic novels, which I own, so I scanned images from the comics, tweaked them in Photoshop, and added color by animating brush strokes in After Effects.
The soundtrack's a song called "Scott Pilgrim," by the Canadian band Plumtree. The author of the comics named the character after the song.
I figured I had aced the basic skill of coloring in a black-and-white image back when I was five, with crayons, but it was actually an arduous process. If you count brush stroke effects as layers, two comps in this animation had over 300 layers.
Ironically, I picked this approach because I had limited time. I had to turn in both projects a little early. On the last day of class, I'll be on an airplane back from RobotsConf. I have to give a big shout-out to my employer, Panda Strike, not just for sending me to this awesome conference, which I'm very excited about, but also for being the kind of company which believes in flexible scheduling and remote work. Without flexible scheduling and remote work, I would have a much harder time studying animation.
Sunday, November 30, 2014
Some notes from Hacker News:
"He was funny, patient, and most of all kind."
"We [left Engine Yard for] our own colo but Ezra helped us at every step of the way, long after it was clear we weren't coming back."
From @antirez: "Ezra was the first to start making Redis popular..."
"Ezra was a innovator in the glass pipe world. A world class artist that reinvented lampworking."
"He used to fly little radio controlled helicopters all over our office at Engine Yard."
I had a conference call with Ezra in 2006 as part of a project and was a total fanboy about it. His work on the Yakima Herald, before he founded Engine Yard, was one of the main things that made me start taking Rails seriously back in those days, back before the hype train really even began. One time we shared a car to the airport after a conference, and of course I saw him at a ton of conferences beyond that as well. He was a very cool guy, and he'll be missed.
Wednesday, November 19, 2014
Thursday, October 16, 2014
The worst thing about this picture is that I didn't have it on my hard drive. I found it via Google Images. But the best part is it took me at least 3 minutes to find it, so, by modern standards, it's pretty obscure. Or at least it was, until I put it here on my blog again.
(Actually, the worst thing about this image is that I just found out it's now apparently being used to advertise porn sites, without my knowledge, consent, or participation.)
Anyway, back in the day, this picture went on Myspace, because of course it did. And eventually the friend who took this picture became a kindergarten teacher, while I became a ridiculously overrated blogger. That's not just an opinion, it's a matter of fact, because Google over-emphasizes the importance of programmer content, relative to literally any other kind of content, when it computes its search rankings. And so, through the "magic" of Google, the first search result for my former photographer's name - and she was by this point a kindergarten teacher - was this picture on my Myspace page.
She emailed me like, "Hi! It's been a while. Can you take that picture down?"
And of course, the answer was no, because I hadn't used Myspace in years, and I didn't have any idea what my password was, and I didn't have the same email address any more, and I didn't even have the computer I had back when Myspace existed. Except it turned out that Myspace was still existing for some reason, and was maybe causing some headaches for my friend as well. I have to tell you, if you're worried that you might have accidentally fucked up your friend's career in a serious way, all because you thought it would be funny to strap a dildo to your face, it doesn't feel awesome.
(And by the way, I'm pretty sure she's a great teacher. You shouldn't have to worry that some silly thing you did as a young adult, or in your late teens, would still haunt you five to fifteen years later, but that's the Internet we built by accident.)
So I went hunting on Myspace for how to take a picture down for an account you forgot you had, and Myspace was like, "Dude, no problem! Just tell us where you lived when you had that account, and what your email address was, and what made-up bullshit answers you gave us for our security questions, since nobody in their right minds would ever provide accurate answers to those questions if they understood anything at all about the Internet!"
So that didn't go so well, either. I didn't know the answers to any of those questions. I didn't have the email address any more, and I had no idea what my old physical address was. I would have a hard time figuring out what my current address is. Probably, if I needed to know that, I might be able to find it in Gmail. That's certainly where I would turn first, because Google has eaten my ability to remember things and left me a semi-brainless husk, as most of you know, because it's done the same thing to you, and your friends, and your family.
Speak of the devil - around this time, Google started pressuring everybody in the fucking universe to sign up for Google Plus, Larry Page's desperate bid to turn Google into Facebook, because who on earth would ever be content to be one of the richest people in the history of creation, if Valleywag stopped paying attention to you for five whole minutes?
My reaction when Google's constantly like, "Hey Giles, you should join Google Plus!"
Since then, my photographer/teacher friend fortunately figured out a different way to get the image off Myspace, and I made it a rule to avoid Google Plus. Having had such a negative experience with Myspace, I took the position that any social network you join creates presence debt, like the technical debt incurred by legacy code - the nasty, counterproductive residue of a previous identity. So I was like, fuck Google Plus. I lasted for years without joining that horrible thing, but I finally capitulated this summer. I joined a company called Panda Strike, and a lot of us work remote (myself included), so we periodically gather via Google Hangouts to chat and convene as a group.
But just because I had consented to use Hangouts, that didn't mean I was going down without a fight.
When I "joined" Google Plus, I first opened up an Incognito window in Chrome. Then I made up a fake person with fake biographical attributes and joined as that person. Thereafter, whenever I saw a Google Hangouts link in IRC or email, I would first open up an Incognito window, then log into Google Plus "in disguise," and then copy/paste the Hangouts URL into the Incognito window's location textfield, and then - and only then - enter the actual Hangout.
This is, of course, too much fucking work. But at least it's work I've created for myself. Plenty of people who are willing to go along with Google's bullying approach to selling Google Plus still get nothing but trouble when they try to use Hangouts.
If anyone knows how to operate google hangout I'd appreciate any help. People don't get my invitations, but they can call me into a 2nd one?— Pat Maddox (@patmaddox) August 26, 2014
The Hangouts app on a Google branded device is hilariously shitty.— John Van Enk (@sw17ch) February 11, 2014
Google Hangouts has the worst user interface for starting and initiating a session. It's like an autistic idiot designed it.— hussein kanji (@hkanji) September 27, 2013
Horrible moment of the morning: Google tricked me into replacing GChat with a Google Hangouts interface.— Kashmir Hill (@kashhill) May 24, 2013
Google Hangouts has the worst, most retarded interface I've ever seen. Every conference call I struggle to enter the hangout.— hussein kanji (@hkanji) August 26, 2013
Protip: don't even tolerate this bullshit.
Imagine how amazing it would be if all you needed to join a live, ongoing video chat was a URL. No username, no password, no second-rate social network you've been strong-armed into joining (or pretending to join). Just a link. You click it, you're in the chat room, you're done.
Panda Strike has built this site. It's called GlideRoom, and it's Google Hangouts without the hangups, or the hassle, or indeed the shiny, happy dystopia.
Clicking "Get A Room" takes you to a chat room, whose URL is a unique hash. All you do to invite people to your chat room is send them the URL. You don't need to authorize them, authenticate them, invite them to a social network which has no other appealing features (and plenty of unappealing ones), or jump through any other ridiculous hoops.
We built this, of course, to scratch our own itch. We built this because URLs are an incredibly valuable form of user interface. And yes, we built it because Google Plus is so utterly bloody awful that we truly expect its absence to be a big plus for our product.
So check out GlideRoom, and tweet at me or the team to let us know how you like it.
Tuesday, October 14, 2014
Quoting Kevin Kelly's simultaneously awesome and awful book What Technology Wants, which I reviewed a couple days ago:
Thomas Edison believed his phonograph would be used primarily to record the last-minute bequests of the dying. The radio was funded by early backers who believed it would be the ideal device for delivering sermons to rural farmers. Viagra was clinically tested as a drug for heart disease. The internet was invented as a disaster-proof communications backup...technologies don't know what they want to be when they grow up.
When a new technology migrates from its intended use case, and thrives instead on an unintended use case, you have something like the runaway successes of invasive species.
In programming, whether you say "best tool for the job" or advocate your favorite One True Language™, you have an astounding number of different languages and frameworks available to build any given application, and their distribution is not uniform. Some solutions spread like wildfire, while others occupy smaller niches within smaller ecosystems.
In this way, evaluating the merits of different tools is a bit like being an exobiologist on a strange planet made of code. Why did the Ruby strain of Smalltalk proliferate, while the IBM strain died out? Oh, because the Ruby strain could thrive in the Unix ecosystem, while the IBM strain was isolated and confined within a much smaller habitat.
However, sometimes understanding technology is much more a combination of archaeology and linguistics.
Go into your shell and type man 7 re_format.
Regular expressions (``REs''), as defined in IEEE Std 1003.2 (``POSIX.2''), come in two forms: modern REs (roughly those of egrep(1); 1003.2 calls these ``extended'' REs) and obsolete REs (roughly those of ed(1); 1003.2 ``basic'' REs). Obsolete REs mostly exist for backward compatibility in some old programs; they will be discussed at the end.
This manpage, found on every OS X machine, every modern Linux server, and probably every iOS or Android device, describes the "modern" regular expressions format, standardized in 1988 and first introduced in 1979. "Modern" regular expressions are not modern at all. Similarly, "obsolete" regular expressions are not obsolete, either; staggering numbers of people use them every day in the context of commands like grep, for instance.
To truly use regular expressions well, you should understand this; understand how these regular expression formats evolved into awk; understand how Perl was developed to replace awk but instead became a very popular web programming language in the late 1990s; and further understand that, because nearly every programming language creator acquired Perl experience during that time, nearly every genuinely modern regular expression format today is based on the format from Perl 5.
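The basic-versus-extended split is easy to see at the shell: the same interval expression needs backslashes in one dialect and drops them in the other.

```shell
# "Obsolete" (basic) REs, as used by plain grep: intervals need backslashes.
echo aaa | grep 'a\{2,3\}'     # prints: aaa
# "Modern" (extended) REs, as used by egrep or grep -E: no backslashes.
echo aaa | grep -E 'a{2,3}'    # prints: aaa
```

Same pattern, same match, two syntaxes, both alive and well decades after being declared "obsolete" and "modern" respectively.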
Human languages change over time, adapting to new usages and stylings with comparative grace. Computer languages can only change through formal processes, making their specifics oddly immortal (and that includes their specific mistakes). But the evolution of regular expressions formats looks a great deal like the evolution which starts with Latin and ends with languages like Italian, Romanian, and Spanish - if you have the patience to dig up the evidence.
So far, then, software engineering includes the following surprising skills: exobiology, archaeology, and linguistics.
Sunday, October 12, 2014
So I recommend this book, but with a hefty stack of caveats. Mr. Kelly veers back and forth between revolutionary truths and "not even wrong" status so rapidly and constantly that you might as well consider him to be a kind of oscillator, producing some sort of waveform defined by his trajectory between these two extremes. The tone of this oscillator is messianic, prophetic, frequently delusional, but also frequently right. The insights are brilliant but the logic is often terrible. It's a combination which can make your head spin.
The author seems to either consider substantiating his arguments beneath him, or perhaps is simply not familiar with the idea of substantiating an argument in the first place. There are plenty of places where the entire argument hinges on things like "somebody says XYZ, and it might be true." No investigation of what it might mean instead if the person in question were mistaken. This is a book which will show you a graph with a line which wobbles so much it looks like a sine wave, and literally refer to that wobbling line as an "unwavering" trend.
He also refers to "the optimism of our age," in a book written in 2010, two years after the start of the worst economic crisis since the Great Depression. The big weakness in my oscillator metaphor, earlier, is that it is an enormous understatement to call the author tone-deaf.
Then again, perhaps he means the last fifty years, or the last hundred, or the last five hundred. He doesn't really clarify which age he's referring to, or in what sense it's optimistic. Or maybe when he says "our age," the implied "us" is not "humanity" or "Americans," but "Californians who work in technology." Mr. Kelly's very much part of the California tech world. He founded Wired, and I actually pitched him on writing a brief bit of commentary in 1995, which Wired published, and that was easily the coolest thing that happened to me in 1995.
Maybe because of that, I'm enjoying this book despite its flaws. It makes a terrific backdrop to Charles Stross's Accelerando. It's full of amazing stuff which is arguably true, very important if true, and certainly worth thinking about, either way. I loved Out Of Control, a book Mr. Kelly wrote twenty years ago about a similar topic, although of course I'm now wondering whether I was less discerning in those days, or if Mr. Kelly's writing went downhill. Take it with a grain of salt, but What Technology Wants is still worth reading.
Returning again to the oscillator metaphor, if a person's writing about big ideas, but they oscillate between revolutionary truths and "not even wrong" status whenever they get down to the nitty-gritty details, then the big ideas they describe probably overlap the truth about half the time. The question is which half of this book ultimately turns out to be correct, and it's a very interesting question.
Obviously, the solution was to remove After Effects from the OS X Dock, which is a crime against user experience anyway, and replace the dock's launcher icon with a shell script. The shell script only launches After Effects if the relevant hard drive is present and accounted for.
("Vanniman Time Machine" is the name of the hard drive, because reasons.)
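A sketch of what that launcher script can look like; the drive name comes from the post, but the app name is a guess, and the alert is optional garnish:

```shell
#!/bin/sh
# Only launch After Effects when the project drive is actually mounted.
check_drive () {
  if [ -d "$1" ]; then echo present; else echo missing; fi
}

if [ "$(check_drive '/Volumes/Vanniman Time Machine')" = "present" ]; then
  : # open -a "Adobe After Effects"   # app name is a guess
else
  : # osascript -e 'display alert "Plug in the Vanniman Time Machine first."'
fi
```

Mounted external volumes show up under /Volumes on OS X, so a simple directory test is all the "present and accounted for" check needs.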
Thursday, October 9, 2014
Sunday, October 5, 2014
Friday, October 3, 2014
The instructor mentions the old rule of thumb that you're best to avoid conversations about religion and politics, and says that he thinks drum technique should be added to the list. He says that during the DVD, he'll tell you that certain moves are the wrong moves to make, but that any time he says that, it really means that the given move is the wrong move to make in the context of the technique he's teaching.
He then goes on to give credit to drummers who play using techniques that are different from his, and to say that it's your job as a drummer to take every technique with a grain of salt and disavow the whole idea of regarding any particular move as wrong. Yet it's also your job as a student of any particular technique to interpret that technique strictly and exactly, if you want to learn it well enough to use it. So when you're a drummer, the word "wrong" should be meaningless, yet when you're a student, it should be very important.
Programming has this tension also. If you're a good programmer, you have to be capable of understanding both One True Way fanaticism and "right tool for the job" indifference. And you have to be able to use any particular right tool for a job in that particular tool's One True Way (or choose wisely between the options it offers you).
Monday, September 22, 2014
Sunday, August 31, 2014
The threshold for this effect is font-weight: 500. At that weight, font-family: "Futura" will indeed produce Futura; at font-weight: 501 and above, font-family: "Futura" will actually produce Futura Condensed Extra Bold. By the way, at any weight, font-family: "Futura Condensed Extra Bold" will produce Times New Roman. I'm not sure why; the charitable explanation is ignorance.
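A minimal repro, assuming a Mac with the system Futura installed (the class names here are made up):

```css
/* Renders as Futura */
.futura        { font-family: "Futura"; font-weight: 500; }
/* Renders as Futura Condensed Extra Bold */
.condensed     { font-family: "Futura"; font-weight: 501; }
/* Renders as Times New Roman, inexplicably */
.times-somehow { font-family: "Futura Condensed Extra Bold"; font-weight: 400; }
```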
Tuesday, August 26, 2014
In my shell, instead of sudo, you use the computer's middle name: "Macbook THELONIOUS Air..." to tell it you are really, really serious.— Reginald Braithwaite (@raganwald) August 19, 2014
Let's call Mr. Braithwaite's bluff. I don't think this tweet is true, but it could be. You can do this in bash:
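Something like this works; the middle name and the sass are from the tweets, and ${SUDO:-sudo} is only there so the function can be exercised without actually escalating:

```shell
# Type "MacBook THELONIOUS Air <command>" to run <command> under sudo.
# Anything less formal earns you some sassback instead.
MacBook () {
  case "$1" in
    THELONIOUS|THELONIUS)  # the tweet and this post spell it differently
      if [ "$2" = "Air" ]; then
        shift 2
        ${SUDO:-sudo} "$@"
      else
        echo "I can tell you're not really serious."
        return 1
      fi
      ;;
    *)
      echo "I can tell you're not really serious."
      return 1
      ;;
  esac
}
```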
Not the most elegant code, of course. And I only tested this with MacBook THELONIUS Air vi /etc/hosts, so it may fail on other commands. I'd expect it to have issues with output redirection, so this might actually be a superior implementation:
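One guess at that superior implementation: just scan forward to "Air" and escalate whatever follows (again, ${SUDO:-sudo} is purely a testing hatch):

```shell
# Less code, more permissive: ignore everything before "Air", sudo the rest.
MacBook () {
  while [ $# -gt 0 ] && [ "$1" != "Air" ]; do shift; done
  [ $# -gt 0 ] && shift   # drop "Air" itself
  ${SUDO:-sudo} "$@"
}
```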
But this approach will give you false positives with no middle name supplied, and costs you the vital sassback feature.
If you want to become a better programmer, you could do worse than to just pick somebody like @raganwald, follow them on Twitter, and then, any time they complain about not having a particular tool or product, implement that tool or product. Of course, some complaints would be harder to resolve than others.
Fixing a bug caused by re-rendering HTML entities. We fixed this problem 15 years ago with type-checking. Fuck.— Reginald Braithwaite (@raganwald) August 18, 2014
Despite the golden opportunity a tweet like this presents — i.e., I could look like a genius if I went off and implemented HTML5 with type-checking — I wanted to reply to it by telling Reg to go and implement this himself. Less effort, better trolling, so: pure win.
However, I decided not to troll Reg about this, since I kind of troll him too often (for instance, consider this blog post). Instead, I phrased it as general advice and got a bunch of retweets out of it.
1/ the bad news is every problem in comp sci will re-assert itself because programming is pop cult. the good news is you can see them coming— an actual panda (@gilesgoatboy) August 18, 2014
2/ easiest way to do really well as a programmer is spot whatever dumb shit we're doing that was fixed 25 yrs ago and re-implement the fix— an actual panda (@gilesgoatboy) August 18, 2014
The remark about "pop culture" is a reference to an Alan Kay interview that both Reg and I frequently reference:
...computing spread out much, much faster than educating unsophisticated people can happen...
the lack of a real computer science today, and the lack of real software engineering today, is partly due to this pop culture...
If you look at software today, through the lens of the history of engineering, it’s certainly engineering of a sort—but it’s the kind of engineering that people without the concept of the arch did...
A commercial hit record for teenagers doesn’t have to have any particular musical merits. I think a lot of the success of various programming languages is expeditious gap-filling. Perl is another example of filling a tiny, short-term need, and then being a real problem in the longer term. Basically, a lot of the problems that computing has had in the last 25 years comes from systems where the designers were trying to fix some short-term thing and didn’t think about whether the idea would scale if it were adopted. There should be a half-life on software so old software just melts away over 10 or 15 years.
(Just as an aside, the absence of this half-life is one major difference between programming languages and spoken languages. Where human languages naturally morph over time, computer languages can only morph through their formal definitions, making them weirdly immortal.)
It was a different culture in the ’60s and ’70s; the ARPA (Advanced Research Projects Agency) and PARC culture was basically a mathematical/scientific kind of culture and was interested in scaling, and of course, the Internet was an exercise in scaling. There are just two different worlds, and I don’t think it’s even that helpful for people from one world to complain about the other world—like people from a literary culture complaining about the majority of the world that doesn’t read for ideas. It’s futile.
I don’t spend time complaining about this stuff, because what happened in the last 20 years is quite normal, even though it was unfortunate. Once you have something that grows faster than education grows, you’re always going to get a pop culture.
To recap, I was saying that all you need to do is dig through the great old ideas of the earlier, more rigorous culture and you'll find pure gold. Of course, @raganwald had already got there:
Wisdom is figuring out which old successes are now cargo cult legacies, and which old failures are now brilliant disruptions.— Reginald Braithwaite (@raganwald) August 18, 2014
This is especially relevant in any discussion of Alan Kay, because one very interesting thing about Alan Kay is that he and Steve Jobs frequently predicted that the same technologies (and the same uses of those technologies) would take over the world. But Alan Kay never seemed hugely interested in when those technologies would take over the world, while Steve Jobs was obsessed with questions like that. And that, in a nutshell, is why Alan Kay is a name known only to serious programmers, while Steve Jobs is a name known to everyone.
When the Mac first came out, Newsweek asked me what I [thought] of it. I said: Well, it’s the first personal computer worth criticizing. So at the end of the presentation, Steve came up to me and said: Is the iPhone worth criticizing? And I said: Make the screen five inches by eight inches, and you’ll rule the world. — Alan Kay
And this is as good a time as any to demystify another of my tweets:
wait what is this nonsense why is Giles wearing a panda costume? pic.twitter.com/EC7QDozjZD— an actual panda (@gilesgoatboy) August 18, 2014
I'm wearing a panda costume because I've joined a company called Panda Strike.
I've known the CEO, Dan Yoder, for years through the Ruby community in Los Angeles. In fact, I met him through the ruby-lang mailing list back in 2007, when I was trying to put together a Ruby users' group. And in 2008, LA Ruby got off the ground, with help from Dan's company and mine, but let's get back to the main theme here. Dan got to a particular future a little early.
Around the time Sinatra emerged, Dan had just written a web framework called Waves, which had a terse API sort of similar to Sinatra's, but was built on the foundation of a deeper understanding of the web and HTTP. (Panda Strike co-founder Matthew King also worked on Waves.) Here's a very superficial comparison of Sinatra and Waves. Both the Sinatra and Waves code examples are setting up a handler for hitting the /show URL with a GET request.
You'll notice in this example that while the on :get, :show syntax works in terms of an HTTP method and a path, which is pretty similar to Sinatra's get "/show", the object model starts with a resource, which is a thing which handles URLs. Waves had a models directory, like Rails, but it had a resources directory too, and that directory played a more central role. Waves was a resource-first framework, whereas you could think of Rails as a model-first framework, and consider Sinatra a URL-first microframework.
Waves and Sinatra both had these routing methods where you would define a URL handler by specifying first an HTTP method and next a path. Both frameworks disregarded MVC in favor of a URL-matching approach. That almost qualifies as an example of multiple discovery, the phenomenon where multiple people make the same scientific discoveries and/or inventions at roughly the same time.
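The shared idea, registering a handler by naming an HTTP method and a path, can be sketched with a toy router in plain Ruby. To be clear, this is not real Sinatra or Waves code; the Router class, its methods, and the 404 string are all invented for illustration.

```ruby
# A toy URL-matching router, illustrating the convergent design:
# define a handler by specifying an HTTP method and a path.
class Router
  def initialize
    @routes = {}
  end

  # Sinatra-style spelling: the HTTP method is the name of the helper.
  #   router.get("/show") { ... }
  def get(path, &handler)
    on(:get, path, &handler)
  end

  # Waves-style spelling: the HTTP method arrives as an argument.
  #   router.on(:get, "/show") { ... }
  def on(method, path, &handler)
    @routes[[method, path]] = handler
  end

  # Look up and invoke the handler for a request.
  def dispatch(method, path)
    handler = @routes[[method, path]]
    handler ? handler.call : "404 Not Found"
  end
end

router = Router.new
router.get("/show") { "showing" }
router.dispatch(:get, "/show")  # => "showing"
```

Both spellings end up in the same routing table; the difference the post cares about is everything around that table, i.e., whether the framework's object model starts from a resource, a model, or a bare URL.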
Even though these APIs are only a little different, the Sinatra approach had a serious advantage in its aggressive simplicity. But it kind of skipped the whole issue of resources, and that core idea at the root of Waves, that a framework should put resources first, was ahead of its time. In recent years, it's kind of become a thing to break overly model-focused legacy Rails apps into resource-oriented services.
I think Waves was delivering a future just a little too early. But Waves is Ruby history now. Today Panda Strike (which includes several other developers and a third co-founder, devops lead Lance Lakey) runs mostly on CoffeeScript and Node.js.
Any controversy will have to wait for now.
I'll probably get into these tech choices in future blog posts, and Dan already has.
- It's great to share code between client and server.
- It's great when your network programming APIs are based on sockets and streams.
@tenderlove Also the fact that Node pervasively uses streams gets around a core problem w/ Rack middleware.— Call Me Maybe Monad (@jcoglan) August 21, 2014
I think Panda Strike's taking an approach to the web which is more modern, more relevant, and more innovative, both in terms of the projects we're using, and the ones we're creating. I think we're going to deliver some futures right on time. I'm planning to blog about that some more, soon.
Thursday, July 31, 2014
A GitHub Drama
After abandoning Node.js for Go, TJ Holowaychuk apparently made his separation official by selling off the branding and official GitHub "ownership" of his Express framework to StrongLoop, a Node.js company whose projects include software services, consulting services, support, and free software. (StrongLoop's CEO, by the way, is no stranger to the concept of businesses based on free software, having previously sold his startup Makara to Red Hat and developed Red Hat's OpenShift product - Red Hat being the company which pioneered open source business models.)
As an aside, I'm often disturbed by how many things GitHub is these days.
.@GitHub is awesome, of course, but it's also so obviously a vim which labors under the deranged misapprehension of being a Facebook.— タチコマ (@gilesgoatboy) July 28, 2014
The latest Node.js drama undermines my tweeted theory, because much of the drama unfolds on GitHub. So maybe GitHub's a Twitter which used to be an emacs?
Anyway, here's the history. If my retelling fails at fairness, apologies to all involved.
First, StrongLoop announced the sponsorship on its blog. A major Express contributor immediately filed an issue on GitHub: "This repo needs to belong in the expressjs org." The discussion that unfolded there is interesting (although currently locked), but here's a summary: Holowaychuk transferred ownership to StrongLoop without either asking or informing the Express community beforehand. StrongLoop's been committed to Node.js for a good while now, and hopes to support Express with documentation and continued development. However, the Express community may have taken over for Holowaychuk some time ago, so there's some contention over whether or not the "ownership" of the project was legitimately his to transfer in the first place.
An angry blog post argues that it was not:
When TJ Holowaychuk lost interest in maintaining Express, he did the right thing (for a change) by letting others take over and keep it going. In open source, that meant the project was no longer his, even if it was located under his GitHub account – a common practice when other people take over a project.
Keeping a project under its original URL is a nice way to maintain continuity and give credit to the original author. It does not give that person the right to take control back without permission, especially not for the sole purpose of selling it to someone else...
What makes this particular move worse, is the fact that ownership was transferred to a company that directly monetizes Express by selling both professional services and products built on top of it. It gives StrongLoop an unfair advantage over other companies providing node services by putting them in control of a key community asset. It creates a potential conflict of interest between promoting Express vs. their commercial framework LoopBack (which is built on top of Express).
This move only benefits StrongLoop and TJ Holowaychuk, and disadvantages the people who matter the most – the project active maintainers and its community.
Holowaychuk responded with a blog post of his own, pointing out that he had communicated with Doug Wilson of the Express community, asking Wilson if he'd like some of the proceeds of the deal:
My intent was to share said compensation with Douglas since he has been the primary maintainer on Express lately. I signalled that intent by emailing him...
I don't want to wade into the drama here, which is why I've made an effort to be dispassionate and objective. I'm totally happy to let that shake out however it shakes out. But I have to admit that I think there's a really interesting question at the heart of all this: who owned the Express web framework? Was it really Holowaychuk's to sell?
I find this question interesting because it reminds me of a totally wrong theory I cooked up recently: that being free is what ruined the Ruby web framework Rails.
Totally Wrong Theory: Being Free Ruined Rails
I've previously argued that the Rails/Merb merge was a mistake, and that Rails went off the rails. I came up with my new, totally wrong theory when I was trying to figure out how the Rails/Merb merge happened in the first place.
Before I get into it, I want to point out that one of the major flaws in my theory here is that Rails isn't actually ruined. As I said, the theory is a totally wrong theory (and being totally wrong is obviously another one of its flaws). But I want to explore the idea to illustrate some of the flaws in the purist, old-school definitions of open source software. Because I don't think that theory is correct, either.
That theory comes from Eric Raymond's The Cathedral and the Bazaar, which provides a great statement of the classic concept of what open source is, and what open source means. This essay, and the book it later became, first articulated the idea that "with enough eyeballs, all bugs are shallow," and laid out 19 rules of open source development. For example, "the next best thing to having good ideas is recognizing good ideas from your users. Sometimes the latter is better." Or, "release early. Release often. And listen to your customers."
The Cathedral represents a software development model where developers build code in private and release it in public. The Bazaar represents a model where all development occurs in public. Raymond argues for the Bazaar over the Cathedral. I don't know how development worked in Express, or how it will proceed now, but Rails uses a hybrid model, where the majority of development occurs in public, yet certain decisions happen in private.
Many other projects use this model as well. (Obviously, in the case of Express, the decision to sell sponsorship occurred in private.)
The Rails/Merb merge is one example of a major decision which occurred in private. There was no public debate, just a sudden announcement, with a big thank you to the Merb team for all the free help that would get Rails 2 to Rails 3. But free help isn't always free.
37Signals (now Basecamp) have long advocated turning down unnecessary feature requests, and Rails creator David Heinemeier Hansson took the idea to absurd lengths with his description of Rails as an "omakase" framework. But one explanation for the Rails/Merb merge is that EngineYard said "we'll pay for Rails 3 to happen, as long as Rails 3 is also Merb 2," and members of Rails core forgot their own advice about turning down unnecessary feature requests because, for once, the unnecessary feature requests came along with the offer of (unnecessary) free work.
To be clear, the feature requests, and the free work to support them, were unnecessary in my opinion, but not in the opinion of the people who made the merge happen. I'm going to make an attempt to be objective regarding Express, but when it comes to Rails, that train has already sailed. It's my belief that the Rails/Merb merge brought Rails an incomplete but ambitious modularity it didn't actually need, and that there's an inherent irony here, because Mr. Hansson vigorously and scornfully opposed adding a different kind of modularity to Rails apps: stuff like moving business logic out of Rails models and into simple Ruby objects, moving application logic out of Rails entirely and treating it as a library instead of a framework, and wrapping Rails's ActionMailer system in simpler API calls.
Good Modularity and Bad Modularity
The general theme: how to unfuck a monorail. Many Rails developers wrestle with this theme, but Mr. Hansson seems (in my opinion) to dismiss it categorically and without any significant consideration. (Indeed, so many Rails developers wrestle with this issue that I think it's fair to call it a crucial moment in the lifecycle of most Rails apps.)
Some of these things are a lot easier to do because of the Rails/Merb merge, yet it's interesting to contrast Mr. Hansson's hostility to these ideas with his embrace of the merge. On the one hand, we saw claims of a powerful modularity that either failed to materialize or which proved useful to only a few people.
On the flip side, Rails's creator seemed pretty contemptuous of people who created simpler, more practical forms of modularization to suit the needs of their individual applications. It's a fascinating contradiction: the developer who once lambasted "architecture astronauts" attacked pragmatic modularization born of very immediate needs, while championing an abstract modularity with less obvious usefulness.
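To make the pragmatic kind of modularization concrete, here's a hedged sketch of its most common move: pulling business logic out of an ActiveRecord model into a plain Ruby object. All the names here (Invoice, LateFeeCalculator, the fee rule itself) are invented for illustration and come from no real app.

```ruby
# Before (sketch): logic buried in an ActiveRecord model,
# testable only with a database and the whole Rails stack loaded.
#
#   class Invoice < ActiveRecord::Base
#     def late_fee
#       overdue_days > 30 ? amount * 0.05 : 0
#     end
#   end

# After: a plain Ruby object with no Rails dependency at all.
# It can be unit-tested in milliseconds and reused outside the app.
class LateFeeCalculator
  GRACE_PERIOD_DAYS = 30
  RATE = 0.05

  def initialize(amount:, overdue_days:)
    @amount = amount
    @overdue_days = overdue_days
  end

  def fee
    @overdue_days > GRACE_PERIOD_DAYS ? @amount * RATE : 0
  end
end

LateFeeCalculator.new(amount: 200.0, overdue_days: 45).fee  # => 10.0
```

The model then delegates to the calculator. Nothing about this requires new framework machinery, which is exactly why so many Rails developers reached for it on their own.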
I think this was an error in judgement, and I think it happened because the work seemingly came for free. Because why else would a team famous for ignoring feature requests happily embrace an incredibly ambitious set of feature requests?
Managing open source frameworks takes time. Writing code takes time; discussing pull requests takes time; and running a private chat room for your "open source" project takes time.
To unpack that last statement, gaining access to the private Rails core Campfire is a key step in becoming a member of Rails core:
Yehuda gave me access to control the LightHouse tickets and to the Rails CampFire...The fact that I was invited to be a part of the Rails Core Team really surprised me. It was unexpected until I read Yehuda in CampFire saying that the guys with commit access should join the core team after the release of Rails 3 and David was OK with that.
Here's where Rails operates as a hybrid between the Cathedral and the Bazaar. Its core team's private Campfire chat functions as a Cathedral, but its GitHub activity functions as a Bazaar.
The Bizarre Bazaar
The Cathedral and the Bazaar argues that the Bazaar is superior because no one person is smarter than a community of smart people, and because nobody can craft a One True Design™ which is better suited to a problem space than the design which will emerge if you allow lots of people to work on the problem.
Yet the "omakase" philosophy also created a community which operates on the foundation of an unspoken shared disregard for the community's alleged leadership. The sign of an experienced Rails developer is a weird duality; a skilled Rails dev knows the recommendations of Rails core, and ignores or contradicts most of them. As Steve Klabnik said, Rails really has two default stacks, the "omakase" stack and the "Prime" stack, which could also be described as the official stack and the stack which is the default for everybody except 37Signals and utter newbies. There is something just deeply, dementedly messed-up about a community where following best practices, or believing that the documentation is correct, are both sure signs of cluelessness.
Rails is not the only open source project to feature this half-Cathedral, half-Bazaar hybrid. (You could call it a bizarre Bazaar.) Ember works in a similar way, and Cognitect's transit-ruby project features the following disclaimer in its README:
This library is open source, developed internally by Cognitect. We welcome discussions of potential problems and enhancement suggestions on the transit-format mailing list. Issues can be filed using GitHub issues for this project. Because transit is incorporated into products and client projects, we prefer to do development internally and are not accepting pull requests or patches.
(This disclaimer, of course, did not prevent people from filing pull requests anyway, one of which was unofficially accepted.)
Sidekiq & Sidekiq Pro
I believe 37Signals and EngineYard both have funded some of Rails's development, and that they're far from alone in this. I know ENTP did the same when I worked for them, and I believe that's also true of Thoughtbot, Plataformatec, several other companies, and of course a staggering number of independent individuals. I'm certain Twitter directly funded some of the work on Apache Mesos, and that Google indirectly funded it as well by contributing to Berkeley's AMP Lab, where Mesos originated. While "open source" was the opposite of corporate development when the idea first swept the world, today most successful open source projects have seen a company, or several companies, pay somebody to work on the project, even though the project then gives the work away for free.
It's an amazing evolution in the economics of software, and something I think everybody should be grateful for.
However, I know of an alternate model, and I have to wonder how Rails might have handled the Merb merge differently, if it had been using this model instead. This is the Sidekiq and Sidekiq Pro model.
In his blog post How to Make $100K in OSS by Working Hard, Mike Perham wrote:
My Sidekiq project isn’t just about building the best background processing framework for Ruby, it’s also a venue for me to experiment with ways to make open source software financially sustainable for the developers who work on it hundreds of hours each year (e.g. me)...
When Sidekiq was first released in Feb 2012, I offered a commercial license for $50. Don’t like Sidekiq’s standard LGPL license? Upgrade to a commercial license. In nine months of selling commercial licenses, I sold 33 for $1,650...
In October last year I announced a big change: I would sell additional functionality in the form of an add-on Rubygem. Sidekiq Pro would cost $500 per company and add several complex but useful features not in the Sidekiq gem...
In the last year selling Sidekiq Pro, I sold about 140 copies for $70,000. Assuming I’ve spent 700 hours on Sidekiq so far, that’s $100/hr. Success! Sales have actually notched up as Sidekiq has become more popular and pervasive: my current sales rate appears to be about $100,000/yr.
If I recall correctly, when he wrote this blog post, Perham was also working full-time as Director of Infrastructure at an ecommerce startup. His blog now lists his job as Founder and CEO of Contributed Systems, whose first product family consists of Sidekiq and Sidekiq Pro. Perham seems to have discovered a really effective model for funding open source software.
What if Rails had used this model? I like to think there's an alternate universe where this happened; where 37Signals gave away Rails for free, and charged a licensing fee for an expanded, more powerful version called Rails Pro.
Rails & Rails Pro
I like to imagine that in this alternate universe, when people wrestled with the paralyzing monorail stage of the Rails app lifecycle, Mr. Hansson and the other members of the Rails core team would have had no choice but to listen to their users, because their business depended on it. I also like to imagine that in this alternate universe, a Merb merge would not have been possible. The financial incentives to think carefully before accepting feature requests, even when they arrive in the form of code, would have been stronger.
Keep in mind that when you send a pull request you're saying, "I wrote some code. I think you should maintain it."— Nicholas C. Zakas (@slicknet) May 29, 2014
But this business model raises a whole bunch of questions, because so many people and companies contributed so much time and effort to make Rails in the first place. Would they have done the same, in this alternate universe? It's one thing when you're contributing to a project "everybody" owns, and another when you're contributing to somebody else's business. (Sidekiq certainly sees a lot of contributions, but Perham does most of the work, and I can't currently peek at the contrib graph for Sidekiq Pro.)
And consider: What happens if Mike Perham wants to sell Sidekiq and Sidekiq Pro? For that matter, what happens if 37Signals wants to sell their interest in Rails? And what if Express had been using this business model? Can you hand off your semi-open-source, semi-commercial project for somebody else to run?
From the "About Us" section on the home page for Mike Perham's company Contributed Systems:
We believe that open source software is the right way to build systems; building products on top of an open source core means the software will be maintained and supported for years to come.
Contributed Systems is a play on the computer science term "distributed systems" and the fact that we allow anyone to contribute to our software.
Sidekiq's popular for a reason: it's really good. And if Sidekiq Pro accepts contributions just like Sidekiq does, then it's neither really open source nor closed source, but more like "gated community source." (It's a Ruby gem, so my guess is that it's open source for those who have access to it, but you have to pay to get that access in the first place.)
There's an enormous mess of contradictions here. Software development is basically the only industry making money right now in the entire United States. And yet its foundation is this basically communist idea that everybody will contribute to the greater good. The idea that you can sell sponsorship and/or ownership of a project, as with TJ Holowaychuk and StrongLoop, really exposes these contradictions.
Law Is Hard, Let's Go Hacking
Perham's "gated community" model might be the best approach. Most open source licenses prefer to avoid these issues by entirely disavowing any and all responsibility. It's simpler, but I doubt it's as sustainable. Here's the MIT License:
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
Translation: "no money changes hands, and you can do anything you want as long as you acknowledge authorship, but we take no responsibility at all for anything which happens, so don't ask us for shit."
I am not a lawyer. If you're a lawyer reading this, I have a question for you: would disavowing any and all warranties still even be possible under the law if money had changed hands?
In a similar vein, I don't think any true Bazaar exists, in the sense of Eric Raymond's metaphor, because it's customary in open source projects to yield final decision-making power to whoever started the project, and to refer to that person as the project's Benevolent Dictator for Life. (If Holowaychuk had any real right to sell Express, this might be where it came from.) That one individual person's final decision-making process is inherently closed, and could only be truly open if we all developed the power of telepathy. I think this "BDFL" custom exists because it's much easier to skirt the issue of the contributors' social contract than it is to define anything more specific.
(Pirate Party founder Rick Falkvinge talks about this extensively in his book Swarmwise, which is essentially about how to use the development model of open source software for political purposes instead. He makes the point that adding a formal voting process to a chaotic, ad hoc organization is most likely to alienate the people who would otherwise become the organization's most productive members, because highly productive contributors are not typically fans of overly bureaucratic process.)
Communist Capitalism or Capitalist Communism?
The open source movement dates back as far as the late 1970s, although at that time it was known as the free software movement, and that is actually a different thing. Whereas the free software movement saw software transparency as a requirement for a free society, open source seeks to fit the superior utility of open development practices into a business framework.
The "open source" label was created at a strategy session held on February 3rd, 1998 in Palo Alto, California, shortly after the announcement of the release of the Netscape source code. The strategy session grew from a realization that the attention around the Netscape announcement had created an opportunity to educate and advocate for the superiority of an open development process...
The conferees also believed that it would be useful to have a single label that identified this approach and distinguished it from the philosophically- and politically-focused label "free software."
Open source projects very often use a communist methodology for capitalist purposes. There are times when this duality is tremendously entertaining; for instance, any time a Linux sysadmin tells you "Communism doesn't work," you get a free joke. Likewise, you get a free joke any time somebody tells you that Linux proves all software should be open source, and the joke is the user interface for Blender, an open source 3D graphics and animation package with notoriously incompetent UX. I think it's extremely likely that the only way to produce good software is to balance capitalist interests against a communist methodology, and if I'm correct about that, it would certainly qualify as one of the many reasons software is inherently hard to get right. The inherent tension between these two forces is tremendous. The drama around Express's transfer of ownership springs from that.
I'd love to give you a pat answer to the question of who owns Express.js, but I think it's a big question.
Update: Mike Perham wrote me to say that his customers can contribute to Sidekiq Pro, and that 5-10 customers have, although he has to own the copyright, to keep the publishing/licensing issues from being insane.
Monday, July 7, 2014
Signal Obscura is, in short, a flexible, wearable Faraday cage. It functions by blocking the signal between cell towers and your cell phone... The model below also features blue LEDs which glow in response to the strength of nearby cell towers, bringing awareness to how exposed the wearer would be to having their data collected, were they not wearing the scarf.
Signal Obscura is not a replacement for other security measures (end-to-end encryption, Tor, etc.) but simply adds another layer of control to the user over the times and places when others may have access to their data....
Signal Obscura was designed as a part of the 48-hour Extreme Wearables Designathon at the Art Center College of Design. Involved in the project were Michelle Leonhart, Barb Noren, Qiyan “Oscar” Li, Ekin Zileli, and Dave Hansungkim.
Saturday, June 28, 2014
1/ Britain had a very strong female leader at the head of a police state in Queen Elizabeth, in the late 1500s, when Shakespeare got started— not a chatbot (@gilesgoatboy) June 29, 2014
2/ so if Hillary Clinton wins in 2016 America will have caught up to where England was in the 1590s— not a chatbot (@gilesgoatboy) June 29, 2014
3/ You can do it, America. I'm rooting for you.— not a chatbot (@gilesgoatboy) June 29, 2014
Sir Francis Walsingham (c. 1532 – 6 April 1590) was principal secretary to Queen Elizabeth I of England from 20 December 1573 until his death, and is popularly remembered as her "spymaster"...
Walsingham was driven by Protestant zeal to counter Catholicism, and sanctioned the use of torture against Catholic priests and suspected conspirators...Walsingham tracked down Catholic priests in England and supposed conspirators by employing informers, and intercepting correspondence. Walsingham's staff in England included the cryptographer Thomas Phelippes, who was an expert in deciphering letters and forgery, and Arthur Gregory, who was skilled at breaking and repairing seals without detection.
Book burning was common in this Elizabethan police state...
Shakespeare's England: It is a land forced into major cultural upheaval for the second time in ten years. It is a society divided by intolerance, a population cowed beneath the iron fist of a brutal and paranoid Police State. It is an unequal society of great wealth and unimaginable poverty...
And just to be clear, I'd probably vote for Hillary in 2016. I'm just saying, if you think it's bad that America hasn't yet caught up to where England was in 1979, you've underestimated the scope of the problem.
Thursday, June 5, 2014
Way back in 2008, at MountainWest RubyConf, somebody highly placed at EngineYard told me that the company funded Merb development because they hoped some of that work would end up in Rails. At the time, I thought the comment made no sense; Rails and Merb were fundamentally different projects with fundamentally different philosophies. But Yehuda Katz (then of EngineYard) announced the Rails/Merb merge only a few months later:
Rails will become more modular, starting with a rails-core, and including the ability to opt in or out of specific components. We will focus on reducing coupling across Rails, and making it possible to replace parts of Rails without disturbing other parts. This is exactly what Merb means when it touts “modularity”...
Rails will be retrofitted to make it easy to start with a “core” version of Rails (like Merb’s current core generator), that starts with all modules out, and makes it easy to select just the parts that are important for your app. Of course, Rails will still ship with the “stack” version as the default (just as Merb does since 1.0), but the goal is to make it easy to do with Rails what people do with Merb today.
This took longer than expected, but it happened, sort of. The initial site generator script is way more pleasant to use as a result, and replacing ActiveRecord with a REST or Mongo client got easier too. That's cool. But the Rails community largely didn't embrace Rails's newfound modularity the way Mr. Katz told us we should expect.
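For the record, the opt-in mechanism looks roughly like this in a Rails 3.x-era config/application.rb. This is a sketch, not a complete config; exact railtie names have varied across Rails versions.

```ruby
# config/application.rb (sketch)
# Instead of pulling in the whole framework with:
#
#   require "rails/all"
#
# you can require just the railties you actually want:
require "action_controller/railtie"
require "action_mailer/railtie"
# ActiveRecord is simply left out; a REST client or a Mongo
# library can take its place without fighting the framework.
```

That's the modularity the merge delivered. The post's argument is that very few apps ever actually needed it.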
Despite all the new options, I still write Rails apps sometimes — partly because there's a lot of Rails work out there, and partly because I love Ruby (and still kind of love Rails). However, I think the modularization of Rails failed, and in this blog post, I'm aiming for a basic post mortem.
In my experience, the people I've seen and worked with in Ruby haven't used Rails's breakout libraries and post-Merb-merge modularity to the extent that Mr. Katz evangelized. One way to understand that is the "Sinatra + ActiveRecord + [many other things]" problem. It's kind of a random tangent, but bear with me. When you need something tiny, Sinatra is awesome, but you can tell you've underestimated the scope of your project if you end up pulling ActiveRecord back in, and then you want migrations, or view helpers, and the bigger your little Sinatra project gets, the more you wonder if you shouldn't just have used Rails, because you're manually importing all its various features.
Sinatra's great for tightly-constrained services, but not so great for projects which might grow in scope, and that makes it a judgement call, because in theory, anything might grow in scope. There's a "tldr: just use Rails" disincentive to actually exploiting Rails's modularity in this fairly shallow and direct way, because you add cognitive overhead and complexity which you could have avoided just by using the more "batteries included" solution. That same disincentive exists with respect to any attempt to reconfigure Rails's architecture, even though it can definitely be worth the effort.
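The creep looks something like this hypothetical Gemfile. The gems named here are real, but the project trajectory is a caricature invented for illustration.

```ruby
# Gemfile (sketch of the "Sinatra + ActiveRecord + ..." problem)
source "https://rubygems.org"

gem "sinatra"               # week 1: a tiny service, perfect fit
gem "activerecord"          # week 3: turns out we need a database
gem "sinatra-activerecord"  # week 4: ...and rake tasks for migrations
gem "rake"                  # week 4: ...to actually run those migrations
gem "erubis"                # week 6: ...and templating with view helpers
# week 8: you are now hand-assembling Rails, one gem at a time
```

Each individual addition is reasonable; it's the sum that tells you the project outgrew its microframework.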
José Valim wrote a terrific book about all the amazing acrobatics you can pull off if you're familiar with the modular components of Rails, and if you compose software with these components, rather than simply building vanilla Rails apps. The only problem is that you kind of have to have José-Valim-level familiarity with Rails's internals to do it well. Mr. Valim's been on the Rails core team for years, and that's a pretty massive time investment at a pretty significant level of skill. So a lot of the modular power of Rails, a major goal which ate up a very significant amount of development time, sits untapped as remarkable power that nobody ever actually uses, because nobody has the years to spend to get on José Valim's level just so they can tackle a few edge cases in ways which will baffle every new programmer they ever onboard, going forward for the entire lifespan of their company.
I'm exaggerating here, and being completely unfair to Mr. Valim's book, but you get the idea. Speaking of shameless rhetorical self-indulgence, Rails's creator David Heinemeier Hansson often receives extremely justifiable criticism for making overly grand statements, but once upon a time, people used to talk a lot more about the intensely beautiful design work he did with Rails at the project's inception.
An ideal Rails app is as rare as an ideal anything else, but without a set of APIs that carefully constrain the problem space down to a manageable subset, it's quite difficult to even start conversations about what to build next. If the overwhelming majority of your web work is about business logic and flow between web pages, what you're going to build next will very probably be either business logic or flow between web pages. But if a substantial part of your web work is reconfiguring architecture, or inventing new architecture from scratch, then "what should we build today?" is a longer conversation, and one which poses challenges to staying focused and effective.
Even today, with the whole shoehorned-in aspect of mobile and JS framework stuff, having a simple canned architecture gives you phenomenal benefits in terms of concentration and peace of mind, at least at your project's outset. If you're dismantling Rails and building something new out of its parts, you're re-opening that can of worms, and that can be expensive, time-consuming, and aggravating. By programmer standards, it's very easy to estimate how long it will take to churn out some familiar chunk of business logic. Building a custom version of a very complex framework takes an unpredictable amount of time and adds a substantial amount of cognitive overhead to a project. It increases your risk of failure, delay, and burnout. If it goes well at all, it'll only be because somebody at your company takes elegant internal API design seriously, and does it well. Dunning-Kruger effects aside, this is a very rare skill.
But the Rails/Merb merge didn't give Rails any of this. In fact, it doesn't seem to have affected many Rails developers directly at all. Very probably, a few companies did take advantage of the new modularity, to solve a few very specific problems, but most people don't know how and don't have the problems which would make it worthwhile in the first place. So the basic problem here is that the Rails/Merb merge wasn't useful to a lot of people, and that it took too long. (In fact, given that many aspects of that modular rewrite still seem unfinished, even today, it might be more accurate to say that it is taking too long.) You have to give the Rails team credit for tackling technical debt, but in this instance, it might not have been worth the effort.
The irony is that Rails developers have come up with their own unofficial, unapproved hacks to supply a much more modest form of modularity in Rails, and Mr. Hansson vigorously opposed this practice about a year and a half ago. It's relatively rare in a Rails app to exploit post-Merb-merge modularity, but it's very common to break your app out into services, and to break god objects into smaller files. Many people who build a Rails app need to do this, sooner or later.
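To make the god-object extraction concrete, here's a minimal sketch in plain Ruby. The class names (`User`, `MonthlyInvoice`) and the pricing logic are invented for illustration; the point is only the shape of the refactoring, which is pulling a cohesive chunk of behavior out of a bloated model into its own small object and file.

```ruby
# A hypothetical god object in miniature: a User model that billing
# logic would otherwise keep accreting onto.
class User
  attr_reader :name, :plan

  def initialize(name, plan)
    @name = name
    @plan = plan
  end
end

# The extracted object: billing lives in its own small file instead of
# as yet more methods piled onto User.
class MonthlyInvoice
  PRICES = { free: 0, pro: 29 }.freeze

  def initialize(user)
    @user = user
  end

  # Amount owed for the user's plan, in dollars.
  def amount
    PRICES.fetch(@user.plan)
  end
end

invoice = MonthlyInvoice.new(User.new("Ada", :pro))
invoice.amount # => 29
```

Nothing here requires Rails or any gem; that's part of the appeal of this style of modularity over the framework-level kind.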
(As an aside, I recently built an unusual thing, namely a Rails app with no User model — the usual candidate for god object status — and was surprised to discover another object in the system creeping towards god object status instead.)
Many people have noticed that the Rails culture is prone to occasional dysfunction and drama. This is not unique to Rails; it's inherent to the social media aspects of open source. But these aspects sometimes work against the end goal of delivering excellent software. This failure to achieve consensus around the topic of modularity may be a perfect example of community dysfunction. Rails developers who developed common ways to make their architecture more modular, to solve problems they all shared, met with opposition from Mr. Hansson. Yet Rails core embraced a more arcane modularity which nobody turned out to want.
It's an interesting mistake, in my opinion. Great design implies the diligent application of exquisitely careful good judgement. Consider how Rails views squash their problem space down to an approximation of PHP, but Rails then expands back into a full OO system towards the back end. That was revolutionary when it first appeared. It suggested some very deep thinking about questions like "what kind of programming is appropriate here?". The way the Rails project has handled questions like "what kind of modularity is appropriate here?" seems less deep to me, in comparison, and less well-balanced.
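You can see that squash-then-expand shape even without Rails, using Ruby's standard-library ERB. The `Product` class below is invented for illustration, but the division of labor mirrors a Rails view and model: the template reads almost like PHP, inert markup with small interpolated expressions, while everything behind it stays a full Ruby object.

```ruby
require 'erb'

# Back end: an ordinary object-oriented Ruby model.
class Product
  attr_reader :name, :price

  def initialize(name, price)
    @name = name
    @price = price
  end

  def formatted_price
    format("$%.2f", @price)
  end
end

# Front end: a PHP-flavored template. All it can do is read values;
# the real logic lives in the object.
template = ERB.new("<h1><%= product.name %>: <%= product.formatted_price %></h1>")

product = Product.new("Widget", 9.5)
html = template.result_with_hash(product: product)
# html == "<h1>Widget: $9.50</h1>"
```

The template deliberately gets a deliberately impoverished programming model, and that constraint is the design decision, not an accident.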
Any decent post mortem needs to also consider what, if anything, Rails lost as a result of its merge with Merb. Matt Aimonetti said it well:
the lack of competition and the internal rewrites made Rails lose its headstart. Rails is very much HTML/view focused, its primarily strength is to make server side views trivial and it does an amazing job at that. But let’s be honest, that’s not the future for web dev. The future is more and more logic pushed to run on the client side (in JS) and the server side being used as an API serving data for the view layer... Rails is far from being optimized to developer web APIs in Rails. You can certainly do it, but you are basically using a tool that wasn’t designed to write APIs and you pay the overhead for that.
The knee-jerk reaction to this might be that paradigms change, but the "new" evolution in the nature of web applications should not surprise you at all if you were paying attention during the browser wars of the late 90s, or if you were paying attention when Google bought Writely and turned it into Google Docs, or if you thought about Microsoft's claim, to the Department of Justice and the courts, that the browser was part of the operating system, or if you read Bill Gates's essay Content Is King, written in 1996.
To quote some relevant commentary:
Microsoft is trying to provide web applications with the same performance as native applications...
This is exactly the nightmare scenario that Bill Gates, co-founder of Microsoft, feared would happen, that the web browser could substitute for the operating system, and that's why he aggressively went after Netscape Communications in the 1990s, resulting in an anti-trust conviction against Microsoft.
I had a brief conversation on Twitter with Avdi Grimm:
This conversation took place before the release of Rails 4. Mr. Grimm's prediction proved incorrect. Although Rails 4 brought plenty of incremental improvements, as well as much-needed concurrency support, it remains a framework based on assumptions about what web programming is which simply are not true any more. Rails 4 is certainly an impressive accomplishment, but it's not the most innovative thing in the world.
Nor should it be, necessarily.
There’s nothing wrong or shameful with nailing a single use case, like VB did for Windows desktop or PHP for web scripts. It’s beautiful! — DHH (@dhh) December 29, 2012
I can't call PHP beautiful, but the basic sentiment is completely legit. But a lot of Rails developers have business models which require cutting-edge technologies. The cutting edge is also just a fun place to be. Here's what the cutting edge looks like in 2014:
This is Verold, a web app which competes with Unity, Cinema 4D, and Maya.
Returning to my discussion with Avdi Grimm, I said this:
I was just being nice. Rails might never recapture the lead, unless Rails core undertakes some very serious re-examination of the project's design assumptions. That's hard to do, and they're probably still tired from the Rails/Merb merge. And Mr. Hansson may not choose to do the same kind of serious, difficult, incisive thinking that he did back in 2004, when he wasn't a millionaire and he had to prove himself. He did some amazing work in his early 20s, during a recession, when everybody works harder than normal, but he may not want to put his promising racing career on hold for a couple of years so he can deal with new technological issues which he doesn't seem to understand or need to care about.
And to be fair, that's not a reasonable thing to expect from him, or indeed anybody. But it does contextualize his recent keynote presentation, at RailsConf 2014, about the alleged demise of TDD.
As Mr. Hansson and his co-author put it in their book Getting Real:
One bonus you get from having an enemy is a very clear marketing message. People are stoked by conflict. And they also understand a product by comparing it to others. With a chosen enemy, you're feeding people a story they want to hear. Not only will they understand your product better and faster, they'll take sides. And that's a sure-fire way to get attention and ignite passion.
In this context, TDD Is Dead just looks like attention-getting fluff to me. We live in a world where Nodecopter is old news. We have a framework which may in fact lag behind the cutting edge, and we have an unresolved tension about what the right level of modularity is in that framework. What's the value in dredging up a mid-2000s buzzword?
With apologies for the snark, I see an important lesson here.
Open source software has to balance two opposing forces: a strong, guiding vision in the service of a particular use case, vs. responding to, and respecting, the project's community. Rails favors the first force over the second. Quoting again from Getting Real:
Just because x number of people request something, doesn't mean you have to include it. Sometimes it's better to just say no and maintain your vision for the product.
Rails might be overbalanced in this direction, and underemphasizing the value of listening to its community. But you could easily argue instead that too many people tried to use Rails for too many inappropriate use cases. It's a judgement call. It will probably always be a judgement call. Rails seems to have chosen to err on the side of saying no, and that's a completely legit choice.
It could even be that the number one mistake in the Rails/Merb merge was that they didn't say no enough. If a company comes to you and tells you that they'll happily refactor your open source project for you, that might be a good time for saying no.
Keep in mind that when you send a pull request you're saying, "I wrote some code. I think you should maintain it."— Nicholas C. Zakas (@slicknet) May 29, 2014
Another lesson to learn might be that user experience design is a much more important part of API design than programmers have traditionally realized. For the sake of argument, let's take a position which is so extreme as to be silly, and agree (for the moment) that nobody should ever have used Rails to make any kind of app other than a Basecamp clone. Let's say Rails is appropriate for one use case and one use case only. The question then becomes, why did so many people misuse it for so many additional purposes?
Maybe because it's a damn good idea to prioritize developer happiness, and treat API design the same way Apple treats user interface and product design. If your technology makes people's work fun, they're probably going to embrace it.
If you're building the next big thing in open source, or trying to, please remember this.
You might also like Rails As She Is Spoke, my book about Rails's design.