Monday, March 26, 2007

A Conversation With Reg Braithwaite

I had an interesting conversation with Raganwald.

He sent me an e-mail with the subject "Bram's Law explains the FizzBuzz phenomenon." It pointed me to a blog post explaining Bram's Law:

The easier a piece of software is to write, the worse it's implemented in practice.

I replied that this has always been my fear for Rails; that in five to ten years, the worst jobs you could get will be Rails jobs where you're maintaining stuff built by non-programmers who figured Rails made programming so easy that they didn't really need to understand what they were doing.

The conversation then moved to Avi Bryant's post on onions, and the questions it raises -- especially the question of to what extent Rails is simply doing onions better than any other onions framework out there. And then I kind of went off on a tangent:

We had a pretty interesting workshop here the other day. I was supposed to just give my "HREF Considered Harmful" presentation, the one from the screencast, but the projector was all screwed up and I ended up having to just make stuff up and write it on a whiteboard. Some of the stuff I made up didn't really make sense, so I got challenged on it. One person was like, couldn't you just emulate Seaside's componentized nature by replacing Rails controllers and views with Builder templates containing objects with built-in render methods which call other Builder templates? And the thing was, it'd work; in fact you'd basically get everything but continuations. But the question was whether it'd be worth it if you didn't get Seaside's session management, and whether the session stuff would or wouldn't be a nightmare in any language but Smalltalk.
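The whiteboard design boiled down to objects that know how to render themselves and can nest other such objects. Here's a minimal plain-Ruby sketch of that shape, with no Rails or Builder involved; all the class names (Component, ItemList, Page) are hypothetical, made up for illustration:

```ruby
# Each component owns its own render method and can embed child
# components by calling their render methods in turn -- the
# Seaside-ish composition, minus continuations and sessions.

class Component
  def render
    raise NotImplementedError, "subclasses supply their own markup"
  end
end

class ItemList < Component
  def initialize(items)
    @items = items
  end

  def render
    rows = @items.map { |i| "<li>#{i}</li>" }.join
    "<ul>#{rows}</ul>"
  end
end

class Page < Component
  def initialize(title, *children)
    @title = title
    @children = children   # child components, rendered recursively
  end

  def render
    body = @children.map(&:render).join
    "<html><head><title>#{@title}</title></head><body>#{body}</body></html>"
  end
end

page = Page.new("Inventory", ItemList.new(%w[apples bananas]))
puts page.render
```

This gets you the componentized rendering; as noted above, it gets you nothing like Seaside's continuation-based session management.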

The idea was that Rails' template system is a big fat smelly onion. We ended up with a design that would probably work much better than Rails, in fact, for Seaside-style development, and still have all the advantages of Ruby over Squeak -- easier DB/Unix integration, more developers, etc. Although the flipside to Bram's law is that any time a really great language is used to solve a moronic problem you suddenly get to choose from ridiculously qualified developers to solve that problem. The Python paradox.

But there's a flipside to the onions thing: he also mentioned meaningful params as an onion. Some people have told me they think Avi Bryant doesn't understand the Web, and I kind of suspect that's because when you throw away meaningful params, you destroy interoperability. That's why the Rails REST stuff is all about cooking APIs directly into everything you code. Everybody loves mashups, and certainly I was using a Google Maps mashup to predict traffic on the LA freeways before Google itself added that feature. It was a win for Google -- free R&D if it implements the idea -- and a win for every person in LA who ever saw it either way. On the other hand, I heard one of the Flickr people explaining that the downside of their APIs is that they have to be able to accommodate really bad programmers pinging their servers a thousand times a day simply because those programmers don't know any better.

Now I have to admit, this "framework" which would be better than Rails for Seaside-style development, obviously that's a bold statement, and obviously this thing is vaporware in the extreme. So take it with a grain of salt. Please take it with a grain of salt, because the last thing in the world I want is an army of angry recovering PHP junkies descending on me and explaining why Rails is the best thing EVAR. I know. Rails is awesome. It's beautiful.

And forgive my hubris; if it's any consolation, I stole the basic idea from a Rails programmer who was dissing Seaside (ironically enough) in a comment that was posted on Ramon Leon's blog.

However, there's one place where it's very easy to see the benefits of Seaside's componentized, templateless style. Picture a complex enterprise application which a large corporation uses to manage logistics like shipping, billing and accounting. Say this company wants to present, to its clients, a subset of the information available throughout this huge sprawling app. You can do it with things like render :partial and render :template -- but components are more elegant for this sort of thing. And this isn't an academic example; it was a task we'd faced and for which components would have been nice to have.

One interesting and terribly tempting misuse of reflection might be to write objects which had to_html methods, which consisted of render :template calls generated dynamically from the names of the object and action called. This would move code out of the templates and back into objects, even when that code couldn't be contained in any one model, kind of like Jay Fields' Presenter layer. But is it worth doing? The answer depends on whether Rails' template system really is a big fat smelly onion or not. Are there advantages to a template architecture that I'm unaware of? There might be. I don't know.
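A hedged sketch of what that reflective to_html might look like in plain Ruby, with a simple hash standing in for Rails' template directory and render :template. Everything here (the TEMPLATES hash, the Presentable module, the Order class) is made up for illustration, not real Rails API:

```ruby
# The template name is derived from the object's class and the action,
# the way Rails derives "orders/show" from OrdersController#show.
# A hash of lambdas stands in for the template files.

TEMPLATES = {
  "order/show" => ->(o) { "<div class=\"order\">Order #{o.id}: #{o.total}</div>" },
  "order/edit" => ->(o) { "<form>editing order #{o.id}</form>" }
}

module Presentable
  # Reflectively derive "order/show" from Order + :show, then render.
  def to_html(action = :show)
    key = "#{self.class.name.downcase}/#{action}"
    template = TEMPLATES.fetch(key) { raise "no template for #{key}" }
    template.call(self)
  end
end

class Order
  include Presentable
  attr_reader :id, :total
  def initialize(id, total)
    @id, @total = id, total
  end
end

puts Order.new(7, "$42.00").to_html          # renders via order/show
puts Order.new(7, "$42.00").to_html(:edit)   # renders via order/edit
```

The point of the reflection is that presentation code lives on the object rather than in a template, while the lookup convention still decides which markup gets used.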


  1. Hey Giles, the way you're linking to me implies I'm the Rails programmer dissing Seaside... would you mind mentioning that it's in the comments of this article you're referring to?

  2. By the way, a Seaside-style framework on top of Rails' ActiveRecord would be a sight to behold. How about even Seaside itself on top of Rails, via Rails' ActiveResource and REST capabilities? Except for the view layer itself, Rails fucking rocks; I'd like to see them glued together.

  3. I happen to like templates; it's a transferable skill that I can use anywhere there's markup to be produced.

    Assuming templates is your thing -- you can always layer a different presentation mechanism on top of Rails -- then check out the capture() method. You can use it to build components that work inside the template, and even yield to blocks which in turn may call other components. So you get rendering from code without resorting to funky render :template trickery.
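The shape of the capture() idea can be sketched in plain Ruby. Rails' real capture() works against ERB's output buffer; the capture and box helpers below are standalone stand-ins, not the actual Rails API, and the names are made up for illustration:

```ruby
# capture runs a block, collects whatever the block emits into a
# string, and hands that string back so it can be embedded in
# enclosing markup -- which lets components nest inside each other's
# blocks, roughly as described above.

def capture(&block)
  buffer = []
  block.call(buffer)
  buffer.join
end

# A "component" that wraps whatever its block captures in a box div.
# Its block may in turn call other components, so rendering composes
# from code without render :template trickery.
def box(title, &block)
  inner = capture(&block)
  "<div class=\"box\"><h2>#{title}</h2>#{inner}</div>"
end

html = box("Outer") do |out|
  out << "<p>some text</p>"
  out << box("Inner") { |inner| inner << "<p>nested</p>" }
end

puts html
```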

  4. Sorry Ramon! Corrected.

    On the Seaside/Rails layering idea, there was actually a post about this on the Seaside list, a guy who was layering Rails and Seaside together and had developed some kind of Seaside reader for Ruby marshal formats, or something. I was very interested in that but lost track because almost immediately after the thread started I came down with a pretty intense flu of some kind.

    Assaf, that's very interesting. I'm not sure where capture() is; I'll look for it. Makes me wish I had more time to really explore this stuff and find a way to make it work, just to play with it.

  5. Giles,

    Not exactly a component, but a good example of using capture to output HTML and wrap responses from other calls:

  6. How about a mature component framework like Delphi's VCL on top of Rails? See

  7. I replied that this has always been my fear for Rails; that in five to ten years, the worst jobs you could get will be Rails jobs where you're maintaining stuff built by non-programmers who figured Rails made programming so easy that they didn't really need to understand what they were doing.

    I see what you're saying, but I wonder if your fear is overblown. The very fact that it looks easy to you is due in part to the fact that you're a skilled programmer who cares about his craft. The thing with dynamic languages is they're like Kernighan & Ritchie-style C (pre-ANSI standard): they give you more than enough rope to hang yourself. The less skilled programmers don't like such languages, because they end up doing just that. In your link to "the Python paradox", Paul Graham returned to his "Java attracts unskilled programmers" argument. The reason this argument has legs is there's some reality to it. Even years ago I used to hear stories about this. The reason it's true is that Java has mechanisms built into it to prevent the developer from hanging themselves. It has strong types, strong scoping rules, mandates that you catch exceptions, only permits single inheritance (though a lot of languages have this), and of course the big one is garbage collection. I have no idea, but it still may not have operator overloading, as some in the Java community would have a raging conniption if it did. And of course it doesn't have more advanced features like closures and continuations. The point is Java provides boundaries. There are some places where a developer just can't go. Ruby does not, even with the Rails framework.

    One thing that strong typing and strong scoping gives you, for example, is compile-time errors. In Ruby there's no such thing as a compile-time error--there's no compiler. Unit testing becomes a lot more important in Ruby than it does in Java, because types are hidden below the surface, and the scoping rules in Ruby are slim. It's easy to lose track of what type of object you're dealing with at any point in time, or for one object to alter something in another object when it shouldn't, if you're not disciplined about it.

    It's true, though, that an amateur Rubyist could create a real mess using just the Rails conventions, in order to try and get something complex done. The less skilled developers tend to stick to the framework as it's given to them and not stray from its parameters, even though this can end up making the job harder and more error-prone. I know, because I've fallen into this trap in the past.
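The compile-time point in the comment above is easy to demonstrate in a sketch. In the following (the total_cents method is hypothetical, made up for illustration), the type mix-up that a Java compiler would reject surfaces only when the line actually executes, which is why a unit test exercising that path becomes the first line of defense:

```ruby
# With no compile step, a wrong type sails through loading and method
# definition; the failure only appears when the arithmetic runs.

def total_cents(prices)
  prices.sum   # expects an array of Integers
end

puts total_cents([100, 200])   # => 300

begin
  # A Java compiler would flag this call at build time; Ruby raises
  # only at runtime, so without a test covering this path the bug
  # ships silently.
  total_cents([100, "200"])
rescue TypeError => e
  puts "caught only at runtime: #{e.message}"
end
```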

  8. In your link to "the Python paradox", Paul Graham returned to his "Java attracts unskilled programmers" argument. The reason this argument has legs is there's some reality to it.

    It's a myth.

    The majority of developers strongly favor a language that provides job security with good benefits. Pick any mainstream language and you'll see that it attracts a fair share of "unskilled programmers", given the fact that most people are just average or worse. But go outside the mainstream, to environments where a language is chosen by the developers -- where Ruby is today, where Java was ten years ago -- and you will find strong bias toward early adopters, who tend to be much better developers than average.

    Paul Graham just used a simple statistical fact to make a point against Java. That would be the fate of Python if it ever became popular, or the fate of Ruby if it keeps growing at the current rate. It's the tragedy of being common.

    As for static typing, that's another myth created by people who divide the world into Java and Ruby, and argue that Java is better for large-scale projects ("unskilled developers" is a misnomer for "more than 5 developers on the payroll"). Step away from the debate and look at the world around you, at the PHP and ABAP, the VB and VBA, the MUMPS and SQR, the REXX and the Perl, and what actually passes as programming languages in the corporate world, and you'll see that type safety is not a mandatory feature.

  9. assaf:

    But go outside the mainstream, to environments where a language is chosen by the developers -- where Ruby is today, where Java was ten years ago -- and you will find strong bias toward early adopters, who tend to be much better developers than average.

    About 8 years ago I remember talking to a friend about Java development. I naively thought that because Java forced developers to use objects, every Java developer would program in the OO style. He pointed out I was wrong. He had already worked three jobs on Java code previous developers had written. Each time, part of his job was to clean up the atrocious code they had written. It was nowhere near OO, except where it was forced by the Java compiler or the Java API. It looked like it had been written in C--badly (sans pointers, of course). He saw a Java program where all the guy had done was put all his code in the main() static method. The most common thing he saw was where they had written the program using straight linear logic, creating one massive object with a static main(), and creating methods within this object to use as functions (as would've been done in C). He said every time the code was a mess, and it surprised him. So don't give me this line that "back before it was popular" life was good. What I took away from this was that they were former C developers who moved to Java without understanding how it does things, or embracing the OO style.

    IMO, ANSI C saved unskilled developers. Granted, they had to be careful with pointers, but ANSI C put type safety into the language. C++ even more so. K & R C was one trap after another. The compiler didn't enforce type safety at all. I'm not even saying that unskilled developers are dumb. I saw some very smart people get tripped up by K & R C to the point that they never wanted to program in it again. I'm only using K & R C as an analogy to what I'm talking about.

    As for static typing, another myth created by people who divide the world into Java and Ruby, and argue that Java is better for large scale projects (unskilled developers is misnomer for "more than 5 developers on the payroll")

    If you want to try putting unskilled developers on a Ruby project be my guest. I think you'll see what I'm talking about. By "unskilled" I'm not talking about "more than 5 on the payroll". I'm talking about developers who have little experience with the language and the framework, who don't have experience with using formal logic, don't understand higher level programming concepts, and don't have a disciplined approach to development.

    As for VB, you do realize that the vast majority of VB developers depend on the VB IDE to get them through many tasks, don't you? They have Intellisense so they don't have to remember what functions/methods an object has, or what type it is. The tool tells them. It also does a kind of lint check on their code, making sure all their calls and references are defined. Secondly, in VB.Net (and I believe in VB 6 as well) they have what's called "Option strict" they can turn on and off, which if turned on enforces type safety.

    I admit ignorance about VBA. I haven't used it much. From what I understand it's VBScript, a late-bound language that does not enforce type safety. However, most VBA developers use the dev. tools that are built in to Office. I have no idea if they have Intellisense as well, but I suspect they do, again to help them out with these issues.

    You're right that PHP is the closest analogy to what I'm talking about with Ruby. There are IDEs for PHP though.

    I'm not ripping on dev. tools. Just saying they have help in areas that could trip them up. There's the flip side as well. As I said in my previous post, dev. tools can lead you down "the garden path" towards a structure that's a real hairball.

    At this point Ruby does have an IDE, but from what I've heard it doesn't have all of the reflection/debugging features that other IDEs have. A lot of Ruby developers use third-party code editors which provide none of the meta information. Ruby provides some tools that are very nice, taking out the busywork.

    I haven't heard of ABAP, MUMPS, or SQR (do you mean SQL?). So can't comment on them.

    I have heard of REXX. I once knew someone who programmed in it years ago. He was a skilled programmer, though.

    As for Perl, is there such a thing as an unskilled programmer who's worked with it? I doubt that. Perl operates by the same principle: enough rope to hang yourself with. My point with skilled vs. unskilled is that skilled programmers have a good understanding of the language, what it's capable of, and the discipline to deal with its lack of boundaries while at the same time creating something that's maintainable. Unskilled developers do not. This isn't to say they couldn't develop those skills, if they wanted to.

  10. These are awesome comments, I just want to say, keep it chill. "Don't give me this line" isn't necessary; intelligent people can disagree.

    Also, I have to tell you, Mark, there are definitely unskilled programmers who work in Perl. I've seen the results. Perl became very popular during the dot-com boom and lots of people learned it so they could get jobs. I was one of them, but I did a better job of learning the language than some. The problem might even be intrinsic to popular languages. I've seen very bad Java code which was written by C guys who wanted to get Java on their resumes and weren't interested in actually doing a good job with it. In fact I think the real problem you're identifying and Assaf's identifying too is that the hiring process for programmers often revolves around recent experience with language XYZ instead of more valuable things like craftsmanship, attention to detail, and a responsible attitude. Cursing programmers who jump bandwagons to make money is tempting, but what if they're starving and they need to feed a family? The real problem is that jumping bandwagons makes you money. Craftsmanship, attention to detail, and a responsible attitude are more likely to contribute to a successful project than recent experience in language XYZ, but the hiring process is usually fixated on recent experience, and as far as I can tell, that's not likely to change.

    The only exception is the 37 Signals rule to hire based on open source contributions. I think I've actually never made an OS contribution in my life, so it's a bit hypocritical for me to say this, but I actually think that rule is a fantastic rule. The flipside to the Python paradox is that once everybody knew Google preferred to hire Python programmers, lots of people in the Bay Area learned Python for that exact reason. (Call it the Python problem.) Likewise, hideous Java written by C programmers who want to put Java on their resume, but don't actually want to learn it, that's because Java became "hot," to use the language of tech recruiters. If OS contributions were the primary hiring metric, rather than recent experience in whichever language, hireability would be much more closely linked to genuine skill and ability. Maybe that'll happen; maybe not. I sure hope so.

  11. In '98 I was hired to train a team of COBOL mainframe developers to do Java. So I know the kind of COBOL-in-Java code we're talking about. Personally, I suspect they weren't even good COBOL developers to begin with, but I don't know enough COBOL to say that with certainty. Doesn't matter. Their managers wanted nothing else but to do Java, that same day. That's all they could talk about, it was the meme of the day, the silver bullet. So they sent team after team to learn that new mystical language.

    During the .com boom companies were so starved for developers, they would pick people from the street -- literally, any person you talked to was a potential candidate -- give them a book and ask them to come Monday morning and start programming, in whatever language that company used. There was Java and there was Perl, and many other languages.

    Like Giles, I don't see any reason to blame these people. They were offered a job in whatever language/platform was the hot commodity, they were hired because someone thought they were good enough (and during the .com boom, breathing was the only skillset you needed), and they went to work. Some used that opportunity to develop their skills, some didn't, and some ran away screaming from anything to do with code.

    The point is, once a language becomes a hot commodity ... well, it becomes a commodity. And just like you can't differentiate commodities, commodities cannot differentiate buyers or sellers. That's another way of saying the only qualifying thing all these hot-commodity developers have is that they applied for a job. They're your typical industry profile, half of whom are below average.

    And the same will probably happen to Ruby, and it's neither good nor bad. It's your choice whether you want to work maintaining code for a company that hired grade-C developers, or work for a company that always hires the best. In my opinion, that choice comes before deciding which language to use.

    And I seriously, seriously recommend stepping away from the selection-biased language debates going on in the blogosphere. Those are generally conducted by skilled developers, with a strong bias toward languages like Java, Ruby, Perl and anything MS sells. But just as that crowd is missing these unskilled developers (who wouldn't bother reading developer blogs), it's also missing all the other languages in common use today.

    And Giles, 37Signals is not alone there, I know (and work for) companies that were doing it for years. My hiring metric is the quality of software you can develop, the way you solve problems, and how you work with other people. If you only work on closed source code, then all you can do is tell me about it. If you worked with open source, I can judge by myself. And always, actions speak louder than words.

  12. I just want to say, keep it chill. "Don't give me this line" isn't necessary; intelligent people can disagree.

    Okay, will do. :)

    Also, I have to tell you, Mark, there are definitely unskilled programmers who work in Perl. I've seen the results.

    Ah, but the question in my mind is did they manage to program successfully in it? Did they produce programs/processes that worked? I can see your point (and assaf's) that the main reason deterioration happens in a language community is widespread corporate adoption, and that this is linked to hiring practices. This also depends on the pay scale, however. In the last several years (though this may be changing as I speak), the pay rate for developers hasn't been that great compared to what it was in the '90s.

    My point was that in Java (and in VB/.Net for that matter) even an unskilled developer can manage to create something that will run. It may not be maintainable, or run efficiently, but it'll run, because there are enough helpful cues, boundaries, and tools that can find the bugs, point them out, and help correct them. It was the same in the past with BASIC in the '70s and '80s: the language was easy to learn and use. Another blogger who's complained about this in the past is Justin James over at TechRepublic. Unlike me, he blames the programmers, whom he calls "shake-n-bake". All he wishes for is that programmers would have a higher level of education so they could contemplate the consequences of their actions.

    This is a bad analogy, but what I'm saying is kind of like in video games. There are some games that lead you down a path, with some options. They don't allow you to go outside those boundaries though. They don't allow one to explore. Whereas there are MMORPGs that allow you to go explore in all sorts of places. It's a matter of style. Some like the "follow the path" games. Others prefer the "allow me to explore" format better. It's a matter of taste. I would think if you're really an adventure game aficionado, you'd tend to like the ones where you can explore.

    I like having few boundaries in a programming language, with a lot of "levers" I can use to get what I want done. I like spending time describing the process in code, where the logic gets the opportunity to "shine through". I don't like having to wrestle with the way the language wants me to describe the process. The amount of code I have to write matters to me. From what I've seen so far, Lisp, Ruby, and Smalltalk fit that bill. Surely others would as well.

    The problem might even be intrinsic to popular languages.

    I wouldn't call it "intrinsic" to popular languages. I'd say it's intrinsic to languages that hold your hand and limit your options, but even this might not be the culprit. I've quoted Alan Kay before on this, but what the heck. Why not have another go? (from the Jan. 2005 ACM Queue):

    "If you look at software today, through the lens of the history of engineering, it’s certainly engineering of a sort—but it’s the kind of engineering that people without the concept of the arch did. Most software today is very much like an Egyptian pyramid with millions of bricks piled on top of each other, with no structural integrity, but just done by brute force and thousands of slaves."

    There's a collective mindset supported by the IT and programming community, along with products supported by major vendors like Microsoft, Sun, and IBM that says this "pyramid building technique" is the way to do things. The real measure, IMO, is how productive programmers become. Using the more powerful languages is akin to using a more advanced building technique. You can build taller, more robust structures with less building material, and fewer people. It requires a higher level of education, though.

    Understanding how to build an arch, or any of its variants (like a dome, which, if you can imagine it, is an arch swept 180 degrees around its midpoint) is more challenging than how to pile up a bunch of blocks. Not to say "piling blocks" is a totally uneducated approach. The Egyptians used mathematics to build their structures. For the most part, though, their system relied heavily on bureaucratic management techniques, and a structure where the vast majority of the workers were just "block makers" or "cave carvers", with a small number of highly skilled craftsmen.

    I've looked at job statistics in the past, including for Perl. Last I checked, the most popular languages were C++, Java, and VB (I guess VB 6). The next runner-up was .Net. Perl is still being used, but it was about as popular as PHP and Python, which compared to the other languages wasn't great. Ruby job numbers were very low (numbering in the hundreds of jobs, nationwide), but it has a bullet. The common characteristic of all of the popular languages is they are strongly scoped, compiled, relatively weak in terms of language features, and they have a lot of tool support. Most of them are strongly typed (VB could fit in this category as well, since it has a switch for strong typing). The ones that are the least popular are the open source dynamic languages.

    In fact I think the real problem you're identifying and Assaf's identifying too is that the hiring process for programmers often revolves around recent experience with language XYZ instead of more valuable things like craftsmanship, attention to detail, and a responsible attitude.

    You put your finger on something significant. It annoys me to no end to see these ads that look for "years of experience with X", no matter whether it's a language, HTML, DHTML, Javascript, AJAX, etc. It's a very old problem, too. My software engineering professor years ago joked about want ads in the 1980s asking for "4 years experience with Ada" when it had only been available for 2 years. I agree with your list of qualities. I think having years of experience as a qualification is good when you're talking about a medium. For example, I had to work with web-app-building technologies for several months before I felt like I had a handle on them. Before that I had worked only on command-line-based servers and utilities, and GUI apps. Learning to deal with the web took me a bit, but not 4 years, for cripes' sake.

    I'm not sure where this hiring mindset came from. I wonder if it's a holdover from the Industrial Age. Something I dearly wish for is for most companies to get more intelligent about their IT hiring practices. They push away well qualified applicants all the time.

    Cursing programmers who jump bandwagons to make money is tempting, but what if they're starving and they need to feed a family?

    I'm not cursing them. I was only using my friend's experience to make a counter point to what assaf said.

  13. When I started working with VB it didn't have much in the way of type checking, code completion, or any of the "save thy self" features we're talking about today. It wasn't considered the language of the gods, but was actually ridiculed for creating -- sounds familiar? -- unskilled developers. Can you imagine a serious developer who can't even decide what type to use in advance and just vars all over the place? Skilled developers, on the other hand, knew all about performance. Skilled developers wrote articles in VB Journal teaching you to use types often to optimize your code.

    The .com boom had a clear two-caste system: Java developers on one hand, and HTML coders on the other. After all, how can you take seriously anyone who programs in JavaScript, a language that lacks all the proper types? Or HTML, with its non-strict enforcement and the fact that you can make up new markup elements at will?

    That's why I find the recent discussions to be a form of revisionism, a rewriting of history to prove a point. We've flipped our position on type systems, and the only reason we're doing it is to prove Java leads to unskilled developers, when in fact Java just attracts your everyday developer, half of whom are worse than average.

    It's not without correlation. An army of median developers does lead a language to dumb itself down, and we can see that in the proliferation of bad frameworks, the inability to move towards functional programming and closures, bloated IDEs, XML-based everything and WS-Complexity. But we do have to draw the line between science and fiction.

  14. assaf:
    Can you imagine a serious developer who can't even decide what type to use in advance and just vars all over the place?

    I'm not quite sure what you mean here.

    The elegance of dynamic languages for me is that your intent comes through more clearly. When done right, the code can be read from the POV of someone who understands the process being implemented (like a business process), not the "processor" of the process. It's not cluttered with type specifiers all over the place. I get to describe the process in formal logic, and say to the processor "evaluate this and give me the result", instead of describing the process the computer goes through to execute my intended process.

    After all, how can you take seriously anyone who programs in JavaScript, a language that lacks all the proper types? Or HTML with its non-strict enforcement and the fact that you can make up new markup elements at will?

    I can take a Javascript programmer seriously. I have some respect for those who are doing AJAX work now without using a helper framework. That's complicated stuff they're doing. I think the AJAX way of doing things in Javascript is sadomasochistic and kludgy, but sure, I have some admiration for those folks, the same way I'd admire kernel hackers. I can admire them, but I wouldn't want to be doing what they're doing. I enjoy a bit of hacking now and then, but what I really like is elegant code.

    I've used VB a little, mostly the VB.Net variety. I did a little VB 6 several years ago.

    When I was growing up I programmed in the old style BASIC that had line numbers, GOTOs, and GOSUBs. It was a simple language to learn, but it lacked power. Of the language choices I had at the time, it was good enough for me. The only other choice I knew of was assembly language, and that wasn't too appetizing. You could certainly write something advanced in BASIC, but it would tend to run slowly, and it took a lot of work. It was really the only language in popular use that had English-looking commands. It had a little in the way of dynamic typing, but not much.

    The only thing I can think of that would match your description of VB being dynamically typed was when Microsoft added object types to it. At some point VB became an object-based language, though not object-oriented until VB.Net. Didn't this come around the time of Intellisense and such, or was it there from the beginning? VB also got the Variant type fairly early on, to deal with COM.

    When I programmed with VB.Net I always had "Option strict" on (and I think "Option explicit"), because that was recommended, and I had largely come to depend on strong typing at that time.

    In the days of old-style BASIC I just remember it being ridiculed for being an easy language to learn that didn't have much power. It was considered a beginner's language--one with training wheels. Once you got good enough at it, you were "expected" to move on to something more advanced in order to gain respect and all that garbage. It was one-upmanship. I don't remember the arguments about it creating unskilled programmers, though I'm sure that was strongly implied.

    There were some prominent folks in the computer science field who ridiculed BASIC. The one who's the most memorable to me is Dr. Dijkstra. I read some old speeches he gave in the 1970s. In a speech titled "How Do We Tell Truths That Might Hurt" (or titled differently, "An Inconvenient Truth" ;) ), he said of BASIC: "It is practically impossible to teach good programming to students that have had a prior exposure to BASIC: as potential programmers they are mentally mutilated beyond hope of regeneration." He was very opinionated and didn't like most programming languages of the day. He had scornful things to say about Fortran, Cobol, and APL, too. I thought it was funny when I read it, but more in the vein of that scene in "Liar, Liar" where the main character gets brought into the boardroom by his sleazy boss, and he "roasts" everybody in the room to uproarious laughter.

    Dijkstra was of a different school than most CS professors. His philosophy was that a program is like a big formula. He insisted on taking a purely mathematical approach to it ("mathematical" in the sense of expressing symbol manipulation in a mathematical way). Most CS curricula taught programming as a series of steps being executed linearly, which is exemplified by the languages he complained about, with the exception of APL.

    This discussion has been good. I don't think that the language itself causes programmers to be unskilled. If that were true I'd still be programming in BASIC to this day, never even thinking to move on to something else out of my own desire. I agree with Dijkstra's philosophy towards CS, but disagree that languages like BASIC cause "mental mutilation beyond regeneration". I think it's possible for people with "prior exposure" to come to his way of thinking. It just takes a willingness to learn and change one's POV.

    I think most of all that the popular languages are inefficient for big tasks. Not in the sense of execution speed, but in the amount of code one has to write. I think the toolsets in particular that cover for this inefficiency coddle inexperienced programmers. They make the massive amounts of code manageable. One can easily be fooled into thinking "my language is the best" because of the tool one is using. It's not really the language, but the tool, and possibly the API, that makes it seem nice. This tends to prevent people from really looking at what's going on and possibly looking elsewhere for better solutions. The need to make an income can do that, too. Giles is totally correct about that. I can say this because I've experienced both of these things.

    As I said earlier, I think that what makes the current norm possible is a certain mindset in the industry as a whole, one dedicated to getting big things done, that says "this is the way to do things." This pervades academia as well. What I and others are saying is it doesn't have to be this way. There are better options. What I guess I've been getting at this whole time is the economics of software development. The way we're doing it now is prone to failure, at an unacceptable level IMO, and is expensive and inefficient. It's been this way for at least a few decades, if not longer, yet it appears to me that we just keep doing more of the same. Some progress has been made, but it's very slow.

    My main concern with using more advanced languages has been driven by getting big, complex tasks done efficiently.

    Maybe some "better than thou" feeling has been coming through in what I've been saying. If so I apologize. That wasn't my intent.

  15. @Assaf -- Historical revisionism is definitely underway, nearly constantly, but especially right now with the whole thing started off by "Beyond Java." The funny thing about the caste system in programming is that scripting languages were definitely considered to be "beneath" Java in that system, and yet currently everybody's moving to scripting languages. It's true that the big move to Ruby comes from Rails, but the features that make Rails so easy to use come from Ruby -- ActiveRecord depends on metaprogramming, ActionView helpers depend on Ruby's fluid, graceful syntax, etc.
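    To make that point concrete, here is a minimal sketch, in plain Ruby, of the kind of metaprogramming ActiveRecord builds its dynamic finders on. The class name, data, and method names here are made up for illustration; this is not Rails' actual implementation, just the general technique of intercepting undefined method calls with `method_missing`:

    ```ruby
    # A toy model of ActiveRecord-style dynamic finders: method_missing
    # intercepts calls like find_by_name and turns them into lookups.
    # TinyRecord and its RECORDS data are hypothetical, for illustration only.
    class TinyRecord
      RECORDS = [
        { name: "Reg", language: "Ruby" },
        { name: "Avi", language: "Smalltalk" }
      ]

      # Catch undefined class methods matching find_by_<attribute>.
      def self.method_missing(method, *args)
        if method.to_s =~ /\Afind_by_(\w+)\z/
          attribute = $1.to_sym
          RECORDS.find { |r| r[attribute] == args.first }
        else
          super
        end
      end

      # Keep respond_to? honest about the dynamic finders we handle.
      def self.respond_to_missing?(method, include_private = false)
        method.to_s.start_with?("find_by_") || super
      end
    end

    TinyRecord.find_by_name("Avi")      # => { name: "Avi", language: "Smalltalk" }
    TinyRecord.find_by_language("Ruby") # => { name: "Reg", language: "Ruby" }
    ```

    No `find_by_name` method is ever defined; Ruby's open dispatch makes the whole family of finders fall out of one hook, which is exactly the sort of thing that's painful to express in Java.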

    I feel a whole new post coming on about the "X years of Y" problem.

  16. @Giles:

    I was intrigued by your comment about "Beyond Java". I didn't know what it was, so did a Google search, and came up with this:

    "A Glimpse At The Future of Programming Languages":

    Funny. I blogged about this a while back myself, though it was more specific to DSLs.

    Quoting from the article:

    "Tate looks at other languages and frameworks and asks hard questions: What do these languages do better? Could any of them become the next 'big new thing?' What will the tools of the future look like? Will they look like J2EE, or will they look like Ruby on Rails? Or, will they be even more radical, like continuation servers?"

    Seeing things like this gets me excited, but given some discussions I've had, like the one here, my enthusiasm is dampened. It's hard to overstate how disappointing the technology decisions of many businesses can be. Somehow, given all the options, they tend to pick technologies that are just mediocre, but well supported/marketed. Call me a cynic. I hope I'm wrong.

  17. "I feel a whole new post coming on about the "X years of Y" problem."


  18. So, reading through the comments here (specifically the one about pyramid building) made me realize something, and it was brought even more to light by reading another post on this blog about Rails's SQL management being akin to garbage collection, and whether or not that will become an essential part of (web application) programming languages in the future.

    My realization is thus: every programmer, no matter how skilled they are, is programming by layers of bricks... it's just, as the skill of development evolves, developers are able to think of bigger and bigger things as "bricks". When you say you build things from the ground up in Ruby, you don't. What you actually do is take a huge amount of very useful C libraries that are linked into Ruby by default, most likely combined with (if you're doing web application building with Rails) some advanced plugins/gems that other people wrote, and those are your "bricks".

    This is what makes a truly great programmer truly great... the ability to continue to increase the size of his "bricks". A lot of programmers, when the shift from C/C++ to Java started, got left behind, because they couldn't imagine garbage collection (among other things) as being really useful, because they already had the bricks to make garbage collection with. The thing is, garbage collection itself is a brick. Over time, the collective intelligence of the development community will slowly change what is considered to be the most basic brick, and we will have a new popular language.

    The reason I think Ruby/Rails will eventually end up where Java is today is that it is built from better bricks, and it is MUCH MUCH easier to build new bricks from it.

    Things are definitely starting to swing this way in the web community, as we see more and more "mashups". These are brick-building at a much higher level, by non-programmers.

    As a matter of fact, this is absolutely not specific to programming or the web... this is the way EVERYTHING works. Every advancement in every field is made in one of two ways: by someone who takes what is currently "advanced" and makes it into bricks, thereby allowing you to take another step forward, or by someone who re-works the existing bricks to function better.

    Ruby and Rails are each examples of one of those things: Ruby is an example of the second; we have re-worked what we build bricks from to be more efficient and easier to build bricks with. Rails is exactly one of those more efficient, easier-built bricks. The really cool thing is that within Ruby, the community strongly encourages people to be aware of the fact that they are making bricks, and to share them. Hence the gigantic (and quickly increasing) supply of plugins for Rails, and all the amazing useful and wonderful libraries for Ruby in general.

