Monday, February 19, 2007

Sapir-Whorf In Action

One idiosyncrasy I've noticed I have as a coder is referring to various processes halting or failing as being "killed." I've been saying it for years; I noticed about a year ago that I was really the only person I knew doing it, and I've just finally figured out where it comes from.

In Perl it's very common to use die("some message") as a de facto breakpoint. You think, well, these names are just metaphor -- but somewhere in my brain the metaphor took hold. Fast forward a few years and every bug gets the violent descriptors "this is killed here" and "that is killing it." After a while the habit started weirding me out, so I've basically stopped, but I still do it from time to time when I'm thinking about something else.

It's totally idiosyncratic and anecdotal, but it makes the Sapir-Whorf Hypothesis a lot more credible to me.

Calls to mind something Gerald Sussman said:

It is no exaggeration to regard this as the most fundamental idea in programming:

The evaluator, which determines the meaning of expressions in a programming language, is just another program.

To appreciate this point is to change our images of ourselves as programmers. We come to see ourselves as designers of languages, rather than only users of languages designed by others.

In fact, we can regard almost any program as the evaluator for some language... Seen from this perspective, the technology for coping with large-scale computer systems merges with the technology for building new computer languages, and computer science itself becomes no more (and no less) than the discipline of constructing appropriate descriptive languages.
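
To make that concrete, here's a throwaway sketch of my own (nothing like SICP's real metacircular evaluator): an ordinary Common Lisp function that serves as the evaluator for a tiny prefix-arithmetic language. The names are made up; the point is just that the thing giving expressions their meaning is itself another program.

    ;; A toy evaluator for a miniature language of prefix arithmetic,
    ;; e.g. (+ 1 (* 2 3)). The "language" is defined entirely by this
    ;; ordinary function.
    (defun tiny-eval (expr)
      (cond ((numberp expr) expr)          ; numbers evaluate to themselves
            ((consp expr)                  ; (operator arg1 arg2 ...)
             (let ((args (mapcar #'tiny-eval (rest expr))))
               (case (first expr)
                 (+ (reduce #'+ args))
                 (- (reduce #'- args))
                 (* (reduce #'* args))
                 (t (error "Unknown operator: ~a" (first expr))))))
            (t (error "Unknown expression: ~a" expr))))

    ;; (tiny-eval '(+ 1 (* 2 3)))  => 7

Change tiny-eval and you've changed the language; that's the sense in which writing a program and designing a language blur into each other.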

I think this goes a long way towards explaining why people feel studying language design makes you a better programmer. It certainly makes me a lot more interested in RSpec than I've been in the past. I saw Dave Astels speak about this at Canada on Rails, and I was extremely skeptical. Looking back, I think I just failed to see the point, but it's a valid point all the same.

2 comments:

  1. Re: A program is just the evaluator of a language

    This idea has been around in computer science for a long time. Dr. Dijkstra was talking about this 20 years ago, though his opinion was in the minority. From his speech, "On the cruelty of really teaching computing science":

    "What is a program? Several answers are possible. We can view the program as what turns the general-purpose computer into a special-purpose symbol manipulator, and does so without the need to change a single wire . . . I prefer to describe it the other way round: the program is an abstract symbol manipulator, which can be turned into a concrete one by supplying a computer to it. After all, it is no longer the purpose of programs to instruct our machines; these days, it is the purpose of machines to execute our programs.

    So, we have to design abstract symbol manipulators. We all know what they look like: they look like programs or --to use somewhat more general terminology-- usually rather elaborate formulae from some formal system. It really helps to view a program as a formula."

    This is just my opinion, but it seemed to me that his emphasis throughout his career was on changing the focus of computer science: from a discipline that believed CompSci students should learn how the computer works and adapt their ideas to that paradigm, to one where programmers express their ideas in formal logic and an evaluator parses and executes that logic.

    Alan Kay has talked anecdotally about this. In a recent blog post I quoted him talking about an "a-ha!" moment he had in the late 1960s when he saw the Lisp spec written in Lisp. He was talking more in terms of how to construct an evaluator. It sounded like this idea was the basis for Smalltalk when it was first written in 1972.

    What Sussman is talking about in a more modern sense sounds like what Charles Simonyi has been working on at Intentional Software.

    I used to think that Martin Fowler was talking about this, but I think he saw things at a more micro scale, with DSLs.

    With the discussion of DSLs, the idea has been spreading that the future of system development is language engineering.

  2. I didn't get the Sapir-Whorf reference until I watched the "Practical Common Lisp" video you referenced in a later post. I've read about the same thing you're talking about, that studying "non-blub" languages will make you a better programmer. That's what got me started on re-learning Lisp and Smalltalk. In a way, I feel like I've swallowed the Red Pill.

    Seibel kind of questions the idea, since "All languages are Turing complete. All languages can be translated from one to the other." Everything that can be done in Lisp can be done in other languages--the catch is, depending on the language feature you're using, it can get very difficult to implement. So the question becomes: would you rather spend your time wrestling with a weaker language to get something powerful done, or are you not going to bother with it--or not even see that it's possible?

    I'm reminded of that scene where the Oracle says to Neo, "What's really going to cook your noodle later is would you have knocked it over if I hadn't told you about it?" There were certain concepts that I had become unfamiliar with after having programmed in C, C++, and C# for so long. Once I picked up Lisp again, I discovered, "Oh. I don't have to spend time constructing a list node by node. I can just write one out if I want, and I can traverse it easily as well." Another was closures. I heard about how .NET 2.0 was introducing anonymous methods, and I thought, "What's the point?" Now that I've rediscovered closures in Smalltalk, I get the point!
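
    To illustrate both of those, here are a couple of toy snippets of my own (not from the book, and the names are made up):

        ;; A list as a literal -- no building it up node by node:
        (defvar *langs* '(lisp smalltalk perl ruby))
        (dolist (lang *langs*)
          (print lang))

        ;; A closure: make-counter returns a function that keeps its
        ;; own private count between calls.
        (defun make-counter ()
          (let ((count 0))
            (lambda () (incf count))))

        ;; (defvar *counter* (make-counter))
        ;; (funcall *counter*)  => 1
        ;; (funcall *counter*)  => 2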

    Seibel talks about continuations in relation to exceptions in Lisp, and he makes a convincing argument. Rather than unwinding the stack and losing all that state before the exception is handled, why not just send the exception up the stack, keep the stack as-is, and let the exception handler choose how much of it to unwind? What a novel idea! He said, "You can implement this in Java," but he also said it's going to get ugly. Lisp just does it more elegantly.
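
    Roughly, what he's describing looks like this in Common Lisp's condition system (my own rough sketch with made-up names, not Seibel's code): the low-level function signals an error and offers a named restart, and the handler installed higher up runs while the signaling frame is still on the stack, then picks a restart to decide how far to unwind.

        ;; Low-level code: signals an error but offers a way to recover
        ;; right here, without deciding the recovery policy itself.
        (defun parse-entry (entry)
          (restart-case
              (if (stringp entry)
                  entry
                  (error "Bad entry: ~a" entry))
            (skip-entry ()          ; unwind only back to this point
              nil)))

        ;; High-level code: handler-bind runs the handler before any
        ;; unwinding happens; invoking skip-entry unwinds only to the
        ;; restart-case in parse-entry, not all the way up to here.
        (defun parse-all (entries)
          (handler-bind ((error (lambda (c)
                                  (declare (ignore c))
                                  (invoke-restart 'skip-entry))))
            (mapcar #'parse-entry entries)))

        ;; (parse-all '("a" 42 "b"))  => ("a" NIL "b")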

    I think ease of use has something to do with it. You're less likely to use a powerful technique if it's difficult to implement--that goes without saying. If it's difficult to implement, it no longer feels powerful. The exception is when it becomes necessary in order to meet a requirement; that's where learning this stuff becomes useful.

    As for me, I think studying more powerful languages has broadened my horizons. I could've gotten all this stuff I've learned by reading books, but actually using it cements it better in my brain.

