Wednesday, September 26, 2007

Training: Give It & Get It

Tomorrow I'm hopping a plane to Philadelphia - from the city of angels to the city of brotherly love, it should be one poetic journey - where I'll be speaking at Ruby East on ways to kick ass with IRB. I did a nice dry run of my presentation tonight at the Pasadena Ruby Brigade, which was fun. Amy Hoy will be there, by the way - only fitting as a post on her blog set off my IRB obsession.

I've met Amy before - at OSCON, RailsConf, and Canada on Rails. Obviously, I go to a lot of conferences. I think everyone should. The huge benefit from conferences is peer pressure. If you read a lot of blogs and every blogger you read tells you to use Cruise Control, you'll get on it sooner or later. If you're sitting at a table with a bunch of really smart people, and a really smart person asks, "Who uses Cruise Control?", and every single hand but yours goes sailing into the air, it won't be a sooner or later thing. You'll remember that moment of shame, and you'll set up Cruise Control soon.

The truth is every developer has some habit, practice or idea that they know they should use and which they're going to start using, any day now, as soon as they get around to it. Conferences can be a nice kick in the butt that way. It's not just the shame method; it also works in reverse. If you meet people who think you've got something to say, and you mention you're not yet up to speed on some technology they think you're easily smart enough to be using, that's a much nicer kind of kick in the butt.

Conferences also, more simply, teach you new stuff. It's good to learn new stuff. The big secret of learning is that teaching is one of the best ways to learn - which is why I think every developer should present at conferences. People think if you're presenting at a conference, you must be an authority on the subject, but the reality is, the presenter isn't really a teacher. The presenter is just the lead student. If you've got questions you want answered, the best way to ask a large number of people for those answers is to show them the answers you've figured out for yourself so far.

Also, it's wise to get as much training as possible. It isn't really possible to be done learning if you're a developer. (In fact, that's probably the best thing about being a developer in the first place.)

Derek Sivers' Switching Back To PHP Isn't Even About Languages

I bookmarked Derek Sivers' recent blog post giving 7 reasons he switched back to PHP from Rails. Turns out I wasn't alone: 1,050 other people on del.icio.us bookmarked it as well. Since then people have been getting all worked up about it on Ruby discussion lists. Obviously, if over a thousand people bookmark something, and spend time discussing it, then people think it's a big deal. But people are wrong. The post didn't really even have 7 reasons. It just had one:

OUR ENTIRE COMPANY’S STUFF WAS IN PHP: DON’T UNDERESTIMATE INTEGRATION

That's about it. If you're looking for all killer, no filler, Sivers' article won't deliver. There's one important point - integration is hard - and one cool insight - "languages are like girlfriends, they're better because you are better" - and there's also a whole bunch of extra words.

Last year Chad Fowler wrote about the Big Rewrite and why it always fails.

You’ve seen the videos, the weblog posts and the hype, and you’ve decided you’re going to re-implement your product in Rails (or Java, or .NET, or Erlang, etc.).

Beware. This is a longer, harder, more failure-prone path than you expect.

Throughout my career in software development, I’ve been involved in Big Rewrite after Big Rewrite...In many cases, these Big Rewrite projects have resulted in unhappy customers, political battles, missed deadlines, and sometimes complete failure to deliver. In all cases, the projects were considerably harder than the projects’ initiators ever thought they would be.

This is not a technology problem. It’s not at all Rails-specific, but being in the limelight these days, Rails implementations are both very likely to happen and very risky right now.

This is exactly what happened to Sivers. He attempted a big rewrite in Rails and later abandoned it for an incremental rewrite in PHP. The Big Rewrite doesn't work. Incremental rewrites do work. But that doesn't say anything about Rails or PHP. You can fail with Rails and succeed with PHP for reasons that have nothing to do with either language.

For some reason, few people in programming study what other programmers have done before, and so, given that the majority of us don't study history, the majority of us are doomed to repeat it. That's fine. If you don't want to study history, go ahead and repeat it. But some of us do study the history of what we do, and some of us have already learned these lessons the hard way.

Personally, I think this failure to study history is a serious shortcoming.

When you think about the reams of money companies go through every time an inexperienced programmer or manager learns this lesson the hard way, and you put an actual price tag on that experience, and then you compare it with the price of a freaking book, you kind of have to wonder what's going on with people in our industry.

Anyway. For what it's worth, the next few days are going to be a good time to write code (or go to conferences!) but a terrible time to read e-mail lists and blogs. There's going to be a lot of discussion about what Sivers' post "means." I'm as guilty as anyone else of responding to the hot topic of the day, but if I see any more discussion of this post, I'm just going to read some programming history on Wikipedia (or even - gasp! - write some code). My suggestion: the moral of the story is that many programmers have a shallow and irrelevant fixation on languages, and don't spend nearly enough time learning the history of what we do.

Update: Raganwald noticed this too.

Sunday, September 23, 2007

Code Fest @ Ruby East

There's going to be a Code Fest following Ruby East, on Saturday. It'll be connected to Ruby For Change and have some kind of excellent saving-the-world angle, although I'm not quite sure of the specifics. I've had an unbelievably horrible flu, so I'm a bit fuzzy on everything right now. Actually, it started out hellish and then quickly turned into the type of thing where you sleep constantly. I can stay awake for about four hours max. Anyway, hopefully I'll be back to normal in time for Ruby East. :-)

Wednesday, September 19, 2007

Should I Owe DHH Money?

I have an utterly heinous flu. As such, even though I've got a ton of work to do, I really can't concentrate on anything more challenging than TV. Even reading tires me out. So I've been watching a lot of TV. I just caught a show where a woman I know is one of the stars. I met her in an acting class in Los Angeles a few months ago. (It isn't a huge show, but whatever. She's great.)

Here's something about the economics of show business. It's absolutely superior to the economics of technology, unless you're the founder of a successful startup. Say this woman I know, her show becomes a hit, and it goes into syndication. She will be paid a particular sum of money every time an episode airs. This income is called residuals, because it's residual income from work she's already done. In some cases, that income can add up to a lot of money.

Why doesn't this happen in programming?

Can you imagine the type of money David Heinemeier Hansson or Brian Behlendorf or Matz or Larry Wall would be making, if they received residual income every time somebody used something they made? Obviously, they do receive residual prestige every time this happens, and it happens very frequently.

But every time there's a tech boom, recruiters recite this platitude, that an amazing programmer is 10 or 20 or 100 times more valuable than an average one. Weirdly, however, their estimate of value never translates into a corresponding economic valuation. They tell us our work is 100 times more valuable, and then they offer us 1.25 times as much money. Somewhere in there, the equation gets damped significantly. However, if you got a residual check every time anybody used your code, the work of amazing programmers would be at least 10 times more financially remunerative than the work of average programmers.

The moral of this story is for tech recruiters. We are precise people, and we understand math. Please don't tell us that you think our work is 100 times more valuable than somebody else's unless you plan to give us 100 times as much money. Just don't do it, because we won't believe you. We can multiply in our heads. We don't need calculators to see that you're lying. And we are unbelievably picky about numbers and logic. We're more offended by the sloppy thinking than the missing money!

(I say "us" not just because I think I'm an amazing programmer, but also because I know every other programmer out there thinks that they're an amazing programmer too.)

Anyway, as is often the case, the obvious moral is infinitely less interesting than the puzzling question. Why aren't residuals the norm with code? Or, more accurately, why are the residuals in tech paid in prestige, rather than cash?

It's interesting to imagine what would happen if residuals were the norm in programming. I think the first thing that would happen is that the number of superfluous, silly, bullshit companies that obviously must have been started on a drunken bet would plummet. Except the very reason for this effect, that writing Web sites would suddenly become expensive, not only totally kills the whole dynamic of development on the Web, it also runs head first into a great essay by Clay Shirky explaining why micropayments will never, ever happen. And of course, if people were getting code residuals, those would be B2B micropayments.

Shirky's argument is so persuasive that he ends up wondering if the rise of bloggers means the end of the film industry:

This change in the direction of free content is strongest for the work of individual creators, because an individual can produce material on any schedule they like. It is also strongest for publication of words and images, because these are the techniques most easily mastered by individuals. As creative work in groups creates a good deal of organizational hassle and often requires a particular mix of talents, it remains to be seen how strongly [sic] the movement towards free content will be for endeavors like music or film.

However, the trends are towards easier collaboration, and still more power to the individual. The open source movement has demonstrated that even phenomenally complex systems like Linux can be developed through distributed volunteer labor, and software like Apple's iMovie allows individuals to do work that once required a team. So while we don't know what ultimate effect the economics of free content will [have] on group work, we do know that the barriers to such free content are coming down, as they did with print and images when the Web launched.

I can tell you with certainty this much: Shirky's argument, while brilliant, is incorrect. His logic is flawless, but the thing he describes as impossible already happens every day. Micropayments have been happening regularly for nearly a hundred years. It's just that they don't happen in technology, and people outside tech use a different term for them: residuals. Some people buy swimming pools with their residuals; other people buy half a Nestle Crunch with their residuals, and pay for the rest of it with the money from their day job. So residuals run the gamut, from the micropayment to the megapayment.

By the way, the economics of TV on your iPod, or DVD, or the Internet, or anywhere outside of TV are very different from the economics of TV on TV. The Writers' Guild in Hollywood is amping up for a strike, because the studios give them bullshit residuals on DVDs, and are trying to get an even more one-sided deal on downloads.

I don't really have a conclusion; it's more of a question. I know there's an economist who argued that software should be free before he ever heard of the free software movement. He came to the conclusion on purely economic grounds and was quite cheerfully surprised when somebody told him that the thing he had predicted for some day in the future had already started to happen. I forgot the guy's name (sorry), but obviously he would argue against the idea of residuals for software. In practice free software is very frequently superior to licensed software, and the theories of the free software movement explain very cleanly why. So I think I probably don't owe DHH money, but I'm really not sure.

Gentlemen, Start Your Unit Tests!

David Heinemeier Hansson rounded off his keynote by announcing the release of a Rails 2.0 Preview Release (how Microsoft-ish does that sound?) either during or shortly after the conference.

From an interesting summary of DHH's keynote at RailsConf Europe.

Joel Finally Posts Something Worth Reading Again

It had to happen sooner or later. A very useful history with insights into the future. Required reading if you write JavaScript at all.

Crazy Sexy Cool: Self-Migrating Schemas

What migrations should have been all along.

The Googlebot May Write Software

This is just a research idea, but it's pretty easy to see how it could pan out. If you can google for code based on a microspec - a BDD-style spec - then one day you can write code to google for code and build new code out of the code it finds.

I for one welcome our new code-scavenging overlords.

Tuesday, September 18, 2007

RubyConf 2007 Schedule Online

Haven't heard any announcements about this - although I've been popping painkillers and antibiotics and sleeping instead of checking e-mail - but the schedule for RubyConf 2007 is online and it's going to be awesome. Highlights include Evan Phoenix on Rubinius, Marcel Molina on code beauty, Jim Weirich on class design, Ben Bleything on controlling electronics, Jay Phillips on Adhearsion, Ryan Davis on "hurting code", and the two Daves of RSpec (Chelimsky and Astels) on RSpec and BDD. Very much looking forward to this one. (Also, I don't want to seem like a player-hater, but I'm pretty glad I didn't see the word "Rails" anywhere.)

Muppet Balls

I just found out that my session at Ruby East is concurrent with Obie Fernandez' session at Ruby East. I'll be talking about kicking ass with IRB. Obie will be talking about Rails and ActiveRecord. At best, I'll be talking to an empty room; at worst, I'll ditch my own session to hear Obie's instead. This is like Mike Tyson vs. a parakeet.

Monday, September 17, 2007

Rabbit Hole Screencast

ssh client for the iPhone, built for Rails Rumble, fully underway and nifty.

Vote for Rabbit Hole!

Steampunk Is A Subset Of Revisionist Sci-Fi, Which Is Itself A Subset Of Fantastical Revisionism

If you look at Sky Captain And The World Of Tomorrow, you can see very easily that steampunk is just a subset of some larger category that Sky Captain is also part of. That category is revisionist sci-fi. If you expand the category further to fantastical revisionism, it suddenly includes everything J.R.R. Tolkien ever did (and, for that matter, The Flintstones).

In practical terms, many more people in our society get their information about the past from fantastical revisionism than from actual history. I think this is probably because the semiotic functions of stories about the past are more powerful culturally than the literal record of events, and semiotic purposes are actually better served by a medium which does not require literal accuracy.

Thursday, September 13, 2007

Rails Rumble: Vote For Me!

Actually, I was hardly involved. But vote for this anyway.

The Rails Rumble is a contest to see who can code the best Rails app in the space of one weekend. I was so exhausted from working too hard that I made hardly any contribution to this project, which was pretty lame of me. But I was still the guy who came up with the idea, so maybe that's something. The idea was an ssh client for the iPhone, called Rabbit Hole. The execution came from BJ Clark and Pat Maddox, and it's pretty rad.

Remember: vote early and often!

BCrypt Convenience Methods

Coda Hale's bcrypt-ruby is the Rails developer's best solution to the rainbow tables scare, but its API is a little counter-intuitive to me.

My current project has developers scattered throughout Minnesota, China, and various regions of Los Angeles. We're not all in the same place, we don't all speak the same languages, and even those of us in the same city could have trouble linking up on any given day (LA is a big place), so intuitive code is a major benefit.

Consequently, I added a couple convenience methods to BCrypt that I think will make it a little easier to use. Check the Pastie by clicking the link.
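To give a sense of the flavor - this isn't the pastie itself, and the method names here are just ones I made up for illustration - the idea is to hide bcrypt-ruby's Password class behind a couple of methods whose names say what they do:

require 'rubygems'
require 'bcrypt'

module BCrypt
  # Turn a plaintext password into a salted bcrypt hash (a String).
  def self.encrypt(plaintext)
    Password.create(plaintext).to_s
  end

  # Check a plaintext password against a previously stored bcrypt hash.
  def self.matches?(hashed, plaintext)
    Password.new(hashed) == plaintext
  end
end

So instead of remembering which side of the == the Password object goes on, you just write something like BCrypt.matches?(user.crypted_password, params[:password]).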

Wednesday, September 12, 2007

Autoload ActiveResource Accessors & Association Classes

If you're trying to use ActiveResource in a Rails app as a plug-and-play replacement for ActiveRecord, it really doesn't work like that. One way you can make your life a lot easier is to build a metaprogramming/reflection process into ActiveResource based on the process inside ActiveRecord.

As it turns out, this is easy to implement. ActiveRecord reads its schema straight out of the database (on MySQL, via a SHOW FIELDS query) the first time a model needs it. If you have control over the application ActiveResource is hitting, you can add a schema URL pretty effortlessly. Then, on the actual ActiveResource app, you just put an alias_method_chain on inherited in ActiveResource::Base. Add an autoload method; have that method do the work of an attr_accessor, except don't actually use attr_accessor, because you need to put the attribute in the @attributes hash for it to be any use.
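To make that concrete, here's a rough sketch of the shape of it - emphatically a sketch, not the code from my project. The /schema.xml URL, its XML layout, and the helper names are things I'm making up for illustration; it's meant to live in a Rails app (so ActiveResource and ActiveSupport are already loaded), and it assumes site is already set on a parent class by the time your subclass gets defined.

require 'open-uri'
require 'rexml/document'

module ActiveResource
  class Base
    class << self
      def inherited_with_autoload(subclass)
        inherited_without_autoload(subclass)
        subclass.autoload_schema
      end
      alias_method_chain :inherited, :autoload

      def autoload_schema
        return if site.nil? || name.to_s.empty?
        schema_column_names.each { |column| add_attribute(column) }
      end

      # Does the work of attr_accessor, except it reads and writes @attributes,
      # which is where ActiveResource actually keeps an instance's state.
      def add_attribute(column)
        define_method(column)       { @attributes[column] }
        define_method("#{column}=") { |value| @attributes[column] = value }
      end

      def schema_column_names
        # Hypothetical server-side action: render :xml => Widget.column_names.to_xml
        doc = REXML::Document.new(open("#{site}/#{collection_name}/schema.xml").read)
        REXML::XPath.match(doc, '//string').map { |element| element.text }
      end
    end
  end
end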

In some ways, this is like using a WSDL file, and it might give you the creepy-crawlies for that reason. In fact, however, it's a lot more elegant. It's dynamically generated, and its syntax is very clean, so you have none of the maintenance and coupling issues WSDL gives you.

It's also much more economical than attaching attr_accessors all over the place with class << self every time you need to use a form handler, and better from a performance standpoint than Rails' default behavior. If you're using ActiveResource stock, unmodified, out of the box, you get a method_missing approach to @attributes which ActiveRecord abandoned a few versions ago in favor of more performant code-generation techniques, i.e., explicitly defining and attaching new methods, rather than passing all the work to method_missing. In a sense, method_missing is a lot like the iPhone when it was first released: amazingly cool, but way too expensive. (Fortunately, method_missing was never saddled with an AT&T service contract.)
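If you haven't seen the difference spelled out, it boils down to something like this - toy classes, obviously, not ActiveRecord's actual internals:

class MethodMissingStyle
  def initialize(attributes)
    @attributes = attributes
  end

  # Every read pays for a failed method lookup plus this dispatch.
  def method_missing(name, *args)
    key = name.to_s
    @attributes.has_key?(key) ? @attributes[key] : super
  end
end

class GeneratedMethodStyle
  def initialize(attributes)
    @attributes = attributes
  end

  # Pay once, up front, to define a real method per attribute.
  def self.generate_reader(name)
    define_method(name) { @attributes[name.to_s] }
  end
end

The generated version costs a little metaprogramming at load time and nothing extra per call afterwards; the method_missing version pays on every single access.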

If you've been following along, or attempting to actually code this from my description, you've probably noticed the big flaw: it doesn't account for association classes. That's pretty easy to fix. Use has_many (and its big family) in your ActiveResource models, and add the has_many family of class methods to ActiveResource::Base. The best way to implement it is to have it do exactly the same thing your autoload code already does in generating its accessor methods. Essentially, the only difference between has_many :widgets and attr_accessor :widgets should be that has_many also updates @attributes. The smart thing is to factor that out, so you can have both the autoloading code and the has_many family hit a class method with a name like add_attribute.

As you might guess, I have some code to show here, but I can't show it at the moment, partly due to corporate politics (LAME) and partly due to not having actually finished it yet. I literally just thought of the has_many part as I was writing this. We've got a lot of this working at my current project, however, and we should have it running in production very very soon.
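For what it's worth, though, here's roughly the shape the has_many part would take - strictly a sketch of the paragraph above, not working code from the project, and it leans on the add_attribute method from the earlier sketch:

module ActiveResource
  class Base
    class << self
      # has_many and friends become thin wrappers around add_attribute, so an
      # association is just another accessor backed by @attributes.
      def has_many(*association_names)
        association_names.each { |name| add_attribute(name.to_s) }
      end
      alias_method :has_one,    :has_many
      alias_method :belongs_to, :has_many
    end
  end
end

# class Order < ActiveResource::Base
#   has_many :line_items   # defines #line_items and #line_items=, backed by @attributes
# end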

Web Apps Are Becoming Shell Scripts, Adobe Sucks, and Robots Are Awesome

Note: I wrote this a few days ago. I can't figure out what my point actually was, but I'm posting it anyway. If you're looking for my best post ever, this probably isn't it.


Back when OS X was brand new, and Windows Vista was just a prolonged joke called Longhorn, a tech writer or blogger whose name I've since forgotten made a prediction which turned out to be very accurate. At the time, the graphics processing power now standard on modern computers was new. It was a huge, staggering leap forward from what had been normal up to that point. The conventional wisdom was that Photoshop would never crash again, at least never again on a Mac. But if you made the mistake I did, you went and bought Photoshop Elements, and you found out how wrong this conventional wisdom was.

Photoshop Elements on the Mac isn't a native Universal binary; it runs under the Rosetta interpreter. It's terrible. It's really very, very bad. I'm not sure whether to blame Rosetta or not. It gets window placement very, very wrong, occasionally putting toolbars that are supposed to span the top of the screen about a hundred pixels lower than normal. It also occasionally merges portions of the chrome which were supposed to be distinct. These are code problems, but it's not just Adobe's coders and quality control who didn't do their jobs well. Elements' UI is so aggressively dumbed-down that if you resize a window, choose some other window, and then go back to the window you resized, the mere act of selecting the window auto-resizes it to some Adobe default, throwing out the size you chose. A feature like that isn't just useless, it's also annoying and aggressive.

Two years ago I got a DVD authoring package from Adobe. It suffered from the same problems - bad code, bad UI decisions, and unnecessary "features" which only annoyed you and never served any useful purpose. At the time, I took this to indicate that Adobe had reached a point in its lifespan as a corporation where it no longer had the clout to demand competent people, and further, had gotten so infested with incompetence that most features were added by people who simply wanted to prove that they had been awake during the workday. You can take Photoshop Elements as a sign that the problem has gotten worse, so much so that I would strenuously advise against accepting any job offers from Adobe, should you get them, or against considering Adobe Web development technologies for even a second. I would also be very careful about wasting any money on Adobe software.

Photoshop hasn't received any new features worth writing home about since version 5 or version 6. Indeed, the only reason I bought Elements was that I switched from the PC to the Mac and couldn't keep installing the version 6 or 7 I'd been using for at least five years. But even though it hasn't gotten any new *useful* features, it's gotten a ton of new features. Adobe comes from the old school, the shrinkwrap era. In that market, you have to keep adding features to stay afloat, even when those features are totally unnecessary bullshit.

This brings us back to the power of modern graphics processors, or at least brings us within sight of it. Adobe today mostly just adds totally unnecessary bullshit to quality software. With Elements, they're approaching a point where the bullshit outweighs the quality. If or when that happens, the result is that customers abandon the software. I'm certainly going to look into my options before replacing Elements with anything from Adobe. Elements is so bad it's insulting that they charged me money for it at all, and insulting your customers doesn't build brand loyalty.

Basically, everybody who works at Adobe today is coasting off work that Adobe did five, ten, or fifteen years ago. They're the Paris Hiltons of the software world, living off the inheritance they lucked into. Inevitably, they'll spend that inheritance down.

The normal response here, among geeks, is blaming marketing. Marketing added unnecessary bullshit to sell the new version. This is exactly what the tech writer above did, when explaining why Photoshop would still crash sometimes, even with the graphics power on modern machines. Marketing will add drop shadows on the active application window, he or she said, and transparent Terminal windows. This extra processing will eat up all the extra power, and application performance will therefore remain about the same.

Although that's exactly what happened, blaming marketing isn't entirely fair. In the case of Photoshop Elements, adding bullshit features to once-great software is inevitable in the context of the shrinkwrap business model. That's just how the business model works. In the case of graphics processing power being wasted on gratuitous eye candy, there's a certain inevitability there as well. Economists call it the tragedy of the commons.

The tragedy of the commons, essentially, is that resources which are freely available to anyone will be squandered by everyone. It's pretty easy to apply that to transparent Terminal windows and adding gratuitous drop shadows to active application windows. Processing power which is freely available to any application will be squandered by every application.

So, here's my argument so far:

1. Fuck you right back, Adobe
2. The tragedy of the commons

Let's move on.

What does all this have to do with Web apps, shell scripts, and robots? The commonality is economic analysis. Web apps are becoming inexpressibly cheap. When I first looked into Ruby on Rails, I had already heard of it, but not paid a lot of attention to it. I checked it out as a favor to a friend. She was nontechnical, hiring somebody to develop a Web site for her, and one candidate was selling her on Ruby on Rails. She needed to know if it was good.

She and I had worked before at a small Web design shop where the owner would charge three days' work to set up an admin interface to a database. It was literally the same thing as Rails' scaffolding. Three days' work replaced with one line of text on the Unix command line. I told her, yeah, it's good. It's really fucking good.

Reducing three days' work to one Unix command is the core use case for technological innovation. Suddenly things which were expensive become cheap. That means you can do new stuff.

You might think that the incredible ease and grace of developing with Rails would lead to a world where Web developers have incredibly easy, graceful lives, building incredibly beautiful Web apps that sparkle in the breeze like some Elven tower out of Lothlorien or the Lands of the East. You might think powerful graphics processors would mean an end to Photoshop crashing. Think again.

Web apps, today, are as cheap as shell scripts. This means people will start to throw them at problems as casually as they throw shell scripts at problems. If you want to see what the future holds for Web apps, ask yourself where incredibly cheap application generation technologies could be useful.

This is where the robots come in.

A couple years ago a robotics research project used a Web app as the user interface to a small fleet of tiny robot helicopters. Each helicopter had an onboard Linux box, and they chose one helicopter, made it the researchers' liaison, and put a Web app on its Linux box. They didn't do it because it was nifty, or cutting-edge, or exciting. They did it because it was cheap. You get standard UI, standard request/response handling, accessible to any number of computers, all of which are guaranteed to have the client software, all over WiFi, practically free.

A lot of people are predicting a robotics boom, and if it happens, these things are going to eat up computing resources like fat kids gobbling down Cheetos. For instance, IPv6 is supposed to give us all the IP addresses we'll need for years, but the very nature of IPv6 - it makes IP addresses freely available to anyone - means that IP addresses will be subject to the tragedy of the commons. If there's a robotics boom, cheap robots will squander cheap IP addresses by the millions. At the same time, putting Web servers on a cheap robot will mean that anybody with a laptop or a mobile phone can access the UI. Their ubiquity and cheapness make Web apps inevitable as the UI platform of choice for robotics.

You can get a Linux box the size of a pack of chewing gum for $100 - they're called Gumstix. They're made for the robotics hobbyist community, and they ship with Ruby installed. The Lego Mindstorms robotics kit - marketed to ages 11 and up - already has enough processing power to host a miniature JVM which supports APIs for graphics, sound, navigation, mapmaking, and subsumption architecture. You can connect it to the Web using Bluetooth and an open-source remote control package called iCommand, or, for that matter, using low-level Bluetooth or USB libraries. And this is just in a Lego kit marketed to children ages 11 and up.

In conclusion:

1. Robots are cool.
2. Web apps are becoming incredibly cheap.

Wednesday, September 5, 2007

Safari RAM Allocation - Watch Out

On my system (a humble 2GHz MacBook with a single GB of RAM) Safari frequently outperforms Firefox in terms of speed and responsiveness. However, its RAM profile is appalling.

[screenshot: Safari's memory usage]

I got this after opening maybe five windows, most with just one tab, and one with about six or seven tabs, including two instances of Google Maps and one of GMail (which we're now apparently supposed to start calling Google Mail).

What prompted the memory profiling, ironically, was the Apple Store. Despite discovering the super-nifty new iPod Touch - an iPhone without the phone - my visit to the Apple Store site was not a resounding success, because it knocked Safari on its ass. The Apple Store window froze, and in the process froze up every other window, including several Web apps I was using.

The crazy part of all this is that I closed every Safari window a few minutes ago but left the app running just to see what it would do. So far, its memory consumption has dropped by a paltry 11MB as a result of having no windows open at all. That is to say, with several windows open, it was using 300MB of RAM, and after closing all of them it's now consuming 289MB of RAM.

Anyway, I don't really have any specific point here, except that Safari RAM consumption is completely insane.

Tuesday, September 4, 2007

Malfunctioning Vote-Counting Machines Are ALWAYS A Sign Of Corruption

This is a message from programmers to normal people.

Dear Normal People,

You often hear about voting machine malfunctions, both here in the US and overseas. I think as a computer programmer it's pretty important to point out that voting is not a difficult problem from a technological standpoint. I think pretty much any one of the programmers who read my blog regularly could build an absolutely secure voting machine in a matter of hours, and their only challenge would be staying awake while doing something so incredibly unchallenging and therefore boring.

Every story you see on the news about voter fraud is an instance of corruption at work. There are no exceptions. The only way systems which handle such incredibly simple problems can malfunction so consistently and spectacularly is if they are designed to do so. It's like putting square wheels on a car. Fucking up something that simple is harder than getting it right. It takes effort. It can only happen deliberately.

Every single time a voting machine malfunctions, somebody has ripped you off. Manufacturing fraudulently "malfunctioning" voting machines has become the new method of choice for undermining democracy. Every time it happens, you should put somebody in jail.


Sincerely,
Computer Programmers

Monday, September 3, 2007

require :gem_name

If you use Rails a lot you get very very used to using symbols as if they were keyword arguments. If you want to do this:

require :capistrano

instead of this:

require "capistrano"

it only requires a tiny change to gems. Go to line 27 of custom_require.rb (the path on a standard OS X box configured with port is /opt/local/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb) and change this:

gem_original_require path

to this:

gem_original_require path.to_s

It's just syntactic sugar, but it feels more intuitive to me.
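If editing files under site_ruby gives you the creeps, a monkey patch along these lines gets you roughly the same sugar from inside your own code. It's a sketch, it needs to load after rubygems, and I haven't tested it against every gems version:

module Kernel
  alias_method :require_without_symbols, :require

  # Coerce whatever we're given (Symbol or String) into a String before
  # handing it to the real, rubygems-enhanced require.
  def require(name)
    require_without_symbols(name.to_s)
  end

  private :require_without_symbols, :require
end

# require :capistrano   # now equivalent to require "capistrano"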

Update: I expanded this into a new version of the require method in custom_require.rb which also takes arrays.

Update again: damn it, this breaks sometimes. I don't know why.

Saturday, September 1, 2007

Ruby Language: Logo Design Contest

Check it out! Big cash prize, huge resume props. Very cool.

Improved Auto-Pastie IRB Code

This code makes it possible for you to paste stuff to Pastie directly from IRB without using the mouse. It's a refactoring of code I posted a while ago on my tumblelog (can't find the exact link). The refactoring exposed an additional feature. I was using OS X clipboard code from Projectionist and I realized the Pastie method would be cleaner if I broke out the OS X clipboard stuff into its own object.

The cool part about that is that you now get the object when IRB loads.
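If you're wondering what that object amounts to, it's essentially a thin wrapper around the OS X command-line clipboard tools, something like this sketch:

class MacClipboard
  # Read whatever is currently on the OS X clipboard.
  def self.read
    `pbpaste`
  end

  # Put a string on the OS X clipboard and hand it back.
  def self.write(stuff)
    IO.popen('pbcopy', 'w') { |clipboard| clipboard.write(stuff) }
    stuff
  end
end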

This means you can directly do things like

MacClipboard.read

and

MacClipboard.write(stuff)

which of course makes possible the use case where you copy some code you've got in a file, switch over to IRB, and go

eval MacClipboard.read

to see what the code does. And the use case where you write some crazy method to auto-generate image data, and then go

MacClipboard.write(image_data)

before switching into Photoshop.

Actually, I haven't figured out that last one, but it's probably possible. I thought it would be as simple as

MacClipboard.write(File.open("picture.jpg") {|f| f.read})

but in fact that block just returns the file's contents as a plain string, and a string of raw JPEG bytes on the clipboard isn't the same thing as image data. Also, I don't have Photoshop on this computer. Obviously, however, any method which generates a URL can pass that URL to OS X's command-line open command - in fact, that's another tiny new feature. Now when you auto-generate a Pastie from IRB, the method opens that Pastie in a new Safari window.
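That last feature is nothing fancy, by the way - it's just a shell-out to open, where pastie_url is a stand-in for whatever URL the pastie method hands back:

system('open', pastie_url)                   # default browser
system('open', '-a', 'Safari', pastie_url)   # or explicitly Safari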

Anyway, the other neat part of this is that a refactoring exposes new functionality. Technically that's kind of a paradox, since the whole definition of refactoring is changing the design without changing the functionality. Definitely a cool paradox, though.