[Wed Aug 25 12:04:04 CEST 2010]

Wired publishes an interview with Fred Brooks, well known for his classic book The Mythical Man-Month. He has just published another book, The Design of Design, in which he muses about the concept of design across several fields of endeavor. Some of Fred Brooks' comments during the interview are well worth considering carefully:

— Did you ever expect it [The Mythical Man-Month] to be read by non-programmers?

— No, and I've been surprised that people still find it relevant 35 years later. That means we still have the same problems.

— What do you consider your greatest technological achievement?

— The most important single decision I ever made was to change the IBM 360 series from a 6-bit byte to an 8-bit byte, thereby enabling the use of lowercase letters. That change propagated everywhere.

— You say that the Job Control Language you developed for the IBM 360 OS was "the worst computer programming language ever devised by anybody, anywhere." Have you always been so frank with yourself?

— You can learn more from failure than success. In failure you're forced to find out what part did not work. But in success you can believe everything you did was great, when in fact some parts may not have worked at all. Failure forces you to face reality.

— In your experience, what's the best process for design?

— Great design does not come from great processes; it comes from great designers.

— But surely The Design of Design is about creating better processes for great designers?

— The critical thing about the design process is to identify your scarcest resource. Despite what you may think, that very often is not money. For example, in a NASA moon shot, money is abundant but lightness is scarce; every ounce of weight requires tons of material below. On the design of a beach vacation home, the limitation may be your ocean-front footage. You have to make sure your whole team understands what scarce resource you're optimizing.

— How has your thinking about design changed over the past decades?

— When I first wrote The Mythical Man-Month in 1975, I counseled programmers to "throw the first version away", then build a second one. By the 20th-anniversary edition, I realized that constant incremental iteration is a far sounder approach. You build a quick prototype and get it in front of the users to see what they do with it. You will always be surprised.

I'd like to emphasize that this is precisely how open source software proceeds: write something and get it out quickly, even if it's just a prototype, then improve on it based on user feedback. The end product will be much better than anything designed top-down with the old "I know best" mentality.

I also liked the departing thoughts:

— In the past few decades, we've seen remarkable performance improvements in most technologies — but not in software. Why is software the exception?

— Software is not the exception; hardware is the exception. No technology in history has had the kind of rapid cost/performance gains that computer hardware has enjoyed. Progress in software is more like progress in automobiles or airplanes: We see steady gains, but they're incremental.

— You've been involved in software for over 50 years. Can you imagine what software will be like 50 years from now?

— Nope. All my past predictions have been, shall we say, short-sighted. For instance, I predicted that every member of a team should be able to see the code of every other member, but it turns out that encapsulation works much better.

— Do you have any advice for young industrial designers and software architects?

— Design, design, and design; and seek knowledgeable criticism.

Chapeau! Here is a true master. That's why his book is still read so many years after it was published. Sure, the reality of day-to-day computing has changed quite a bit, but the core issues he addressed in The Mythical Man-Month are still there. {link to this story}

[Tue Aug 24 12:43:43 CEST 2010]

You can see certain things coming. Oracle killing OpenSolaris is one of them. Sun Microsystems, in a truly desperate move, decided to release the source code to its Solaris OS back in 2005. By then it was patently obvious that commercial UNIX was dead, but Scott McNealy and others refused to see it. The reality is that Windows and Linux on the workstation and, especially, Linux on the server were killing it slowly but surely. Yes, UNIX was still more stable than either OS, but they were quickly catching up, they had momentum on their side, plenty of applications were being written for them and, above all, they had a lively and dynamic community. UNIX had none of that anymore. Truth be told, the business interests of the major UNIX vendors killed the OS back in the 1990s. As I said, by the time Sun decided to open-source its crown jewel, it was too late. The writing was on the wall. Yes, Solaris would be the last commercial UNIX to survive (yes, both AIX and HP-UX are still around, but they have long been confined to a niche market too small to matter), but it would fall nevertheless. The Windows and Linux juggernaut (and, above all, the Intel and PC wave) was too mighty.

Now, when OpenSolaris was released there was a good amount of hype, of course. There was even talk of killing the Linux machine. They were that cocky back then. Yet it truly didn't take much to realize that it was all part of the dreams of a company whose glory days were behind it. Sun, "the dot in dot.com", somehow figured that there was a way to turn things around and fight their way back to the top, but it wasn't possible anymore. Simply put, they had reacted too late. When Oracle bought Sun, everybody knew the writing was on the wall. Larry Ellison never cared much for open source, except for using it in order to lower costs, of course. To him, it's one thing to sell his database to customers running open source software (the key word there is "selling"), and quite another to pay the development costs of something that people can get for free. That's not Larry Ellison's style. Who is surprised, then, that he axed the project? My guess is that the community will pick up where Oracle left off, but the result will be something far less dynamic than BSD and, for sure, barely a competitor to Linux, which is where the action still is. {link to this story}

[Fri Aug 13 20:05:09 CEST 2010]

Today I ran into one of those real-life cases where choosing an open source technology proved to be the right decision. Mind you, it is nothing earth-shattering. It is about as simple as it gets. However, simple as it is, it made my day. As I've written in these pages before, I run mutt as my email client. I also have it configured to automatically show every email containing a particular string in the subject line in a different color, so that it stands out in my inbox. It's all very simple, of course; it's something you can accomplish with pretty much any email client out there. The problem started when one of the people emailing me inadvertently changed the original subject line and introduced a typo. From that moment on, every email in the thread showed up in the normal color and didn't stand out. So, what was I to do? Create a new filter for the subject line with the typo? Why bother? It was as easy as exiting the email client, opening the mailbox in my favorite editor —it's vim, for the record—, searching for the string with the typo and replacing it with the correct one. Notice that this only worked because the mutt email client uses a non-proprietary standard to store the messages and, on top of that, adheres to the old UNIX philosophy of using plain text as much as possible. Why do it this way? Because UNIX hackers are old-fashioned grumpy jerks? No, because it allows you more flexibility and freedom. Not only could I easily solve the problem at hand with my favorite editor, but, should I need to, I could also script the whole thing, as the sketch below shows. {link to this story}
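
For the record, here is roughly what that scripted fix could look like. It is only a minimal sketch using Python's standard mailbox module; the mailbox path and the misspelled subject strings are made up for the example.

    # Minimal sketch: fix a typo in the Subject: header of every message
    # stored in a plain-text mbox file. Path and strings are hypothetical.
    import mailbox

    BAD = "Projcet status"      # the misspelled subject fragment (made up)
    GOOD = "Project status"     # what it should have said

    box = mailbox.mbox("/home/user/Mail/inbox")   # hypothetical mbox path
    box.lock()                                    # don't race mutt or the MTA
    try:
        for key in box.keys():
            msg = box[key]
            subject = msg.get("Subject", "")
            if BAD in subject:
                msg.replace_header("Subject", subject.replace(BAD, GOOD))
                box[key] = msg                    # write the change back
    finally:
        box.flush()
        box.unlock()

The same thing could be done with a one-liner in sed or a quick vim macro; the point is simply that a plain-text mailbox is open to whatever tool you happen to prefer.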

[Fri Aug 6 13:44:42 CEST 2010]

Inside HPC publishes an interesting interview with SGI's CTO, Eng Lim Goh (DISCLAIMER: I work for SGI, but it truly is an experience to hear or read what Eng Lim Goh has to say about the topic of high-performance computing, since he is so obviously passionate about it). In any case, aside from a few musings about the future of computing and SGI's latest hardware product (the Altix UV), he discusses the concept of a global address space as something distinct from shared memory or distributed memory:

We sat down and worked out that to cut down that overhead, we need a global address space. With this, memory in every node in the Exascale system is aware (through the node controller) of every other memory in the entire infrastructure. So that when you send a message, a synchronization, or GET PUT to do communications, you do it with little overhead.

But I must emphasize, as many even well-informed HPC people misunderstand, that this global address space is not shared memory. This is the other part of Altix UV that has not been understood well. Let me therefore lay it out.

At the highest level you have shared memory. In the next level down you have global address space and next level down you have distributed memory. Distributed memory is what we all know; each node doesn't know about its neighboring nodes and what you have to do is send a message across. That's why it's called Message Passing.

Shared memory then is all the way up. Every node sees all the memory in every other node and hears all the chatter in every other node. Whether it needs it or not, it will see everything and hear everything. That's why Linux or Windows can just come in and use the big node.

However, for all the goodness that hearing and seeing everything brings you, big shared memory cannot scale to a billion threads. It's just like being in a crowded room and trying to pay attention to all the chatter at once even though it is not meant for you. You would get highly distracted.

So if you go to the other extreme, to distributed memory, you are still in a house, but you sound-proof the walls and shutter the windows. As such, you see nothing and hear nothing of your neighbors. The only way to get a communication across is to send a message, writing a letter or an email and sending it to a neighbor.

So we decided that a global address space is the best middle ground. In that analogy, global address space sees everything, but does not hear the chattering amongst neighbors. All it wants to do is see everything so that it can do a GET PUT directly, do a SEND RECEIVE directly, or it can do a synchronization expediently. So a hardware-supported, global address space is one way to get the communications overhead lowered in the Exascale world. And this is especially important when you're talking about a billion threads. Imagine trying to do a global sum on a billion threads. I hope we can code around it, but my suspicion is that there will still be applications needing to do it.
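
Just to make the distinction concrete for myself, here is a toy sketch in Python. It is purely illustrative, my own analogy in code rather than anything resembling SGI's actual interfaces: with distributed memory both sides must take part in every transfer, whereas with a global address space one node can GET or PUT into another node's memory directly, without the other side doing anything.

    # Toy illustration only: two tiny classes that mimic the programming-model
    # difference between message passing and a global address space.

    class MessagePassingNode:
        """Distributed memory: private storage, explicit send/receive."""
        def __init__(self):
            self.memory = {}    # invisible to every other node
            self.inbox = []     # the only way in is a message

        def send(self, other, payload):
            other.inbox.append(payload)

        def receive(self):
            return self.inbox.pop(0)

    class GlobalAddressSpaceNode:
        """Global address space: remote memory is directly addressable."""
        def __init__(self):
            self.memory = {}

        def put(self, other, addr, value):
            other.memory[addr] = value     # one-sided write; no receive needed

        def get(self, other, addr):
            return other.memory[addr]      # one-sided read

    a, b = MessagePassingNode(), MessagePassingNode()
    a.send(b, 42)           # two-sided: b still has to call receive()
    print(b.receive())

    c, d = GlobalAddressSpaceNode(), GlobalAddressSpaceNode()
    c.put(d, "x", 42)       # one-sided: d does not participate at all
    print(c.get(d, "x"))

Shared memory, in this analogy, would be a single dictionary that every node reads and writes directly, with all the coherence chatter that implies.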

It sounds quite interesting and sensible to me. Going for the middle ground almost always proves to be a winner. {link to this story}

[Wed Aug 4 15:35:02 CEST 2010]

Everyone is entitled to his or her opinion, of course. Also, it is true that the Google Chrome browser has been progressing by leaps and bounds in the last few months, to the point that it can easily be considered better than good old Firefox. However, I take issue with a piece by Ken Hess titled Firefox Falls Further Behind in Browser Wars, published by PC World. Simply put, regardless of what you think of Chrome and Firefox, the article is quite misleading, and one wonders whether the author even knows what he is talking about. For example:

Firefox fans tout the browser's use of extensions, or add-ons, as one of its many boastworthy features, but if you've ever connected to a site that uses some new Web feature that Firefox doesn't support, you're out of luck. Those same extensions often break other extensions on the way during installation.

Further, why should a user constantly download and install extensions for such common Web gadgetry as Flash or PDF? Why aren't those extensions included by default if their inclusion is necessary for a rich web experience?

What the...? What is he talking about? You don't need extensions or add-ons to view Flash pages or PDF documents. That has absolutely nothing to do with Firefox itself. The Linux distribution he is running may or may not install the plugins by default, but that is something that cannot be blamed on the browser. Now, if one were to read something like this in a local daily newspaper, that would be one thing. But this is a computer magazine, for crying out loud! {link to this story}