[Tue Nov 30 08:42:17 CST 2004]

Wired has published an interesting article about SGI hobbyists. Sites like Nekochan, Silicon Bunny and SGI Zone sell SGI hardware and offer some form of community by means of bulletin boards, software tips, etc. As the author mentions, it is simply amazing to see old systems such as the Indy or the O2 still being used for something productive so many years after they came out. Yes, they are slow. Yes, you cannot run many of today's everyday applications on them. But they can still be used as multimedia workstations, and they sport a wonderful design. One only needs to open any of those systems and take a look inside the box to notice this. Hats off to the engineers who put together these systems. {link to this story}

[Tue Nov 23 15:47:43 CST 2004]

Computer World reports that several European websites have been inadvertently spreading the Bofra worm to web users. The news is quite worrisome, and it clearly spells out what the future could look like if we continue putting all our eggs in the same technological basket. In this case, the exploit was spread by a firm that specializes in serving ads to major websites, and it preyed on systems running Windows XP with Service Pack 1 or older releases. On a related note, Jeff Duntemann recently wrote an excellent piece on software monoculture and why it happens no matter what Microsoft does. The picture he paints is, I think, quite bleak, and yet very sensible:

Software monoculture happens for a lot of reasons, only a few of them due to Microsoft's sales and marketing practices. In the home market, nontechnical people see safety in numbers: they want to be part of a crowd so that when something goes wrong, help will be nearby, among family, friends, or a local user group.

In corporate IT, monoculture happens because IT doesn't want to support diversity in a software ecosystem. Supporting multiple technologies costs way more than supporting only one, so IT prefers to pick a technology and force its use everywhere. Both of these issues are the result of free choices made for valid reasons. Monoculture is the result of genuine needs. Technological diversity may be good, but it costs, in dollars and in effort.

As if that weren't bad enough, there is another kind of software monoculture haunting us, far below the level of individual products -- down, in fact, at the level of the bugs themselves.

If you give reports of recently discovered security holes in all major products (not merely Microsoft's) a very close read, you'll find a peculiar similarity in the bugs themselves. Most of them are "buffer overflow exploits", and these are almost entirely due to the shortcomings of a single programming language: C/C++.
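
For anyone who has never seen what an unchecked copy actually does, here is a toy simulation of the mechanics, written in Python with ctypes so that nothing actually crashes: a fixed-size buffer, a write that is four bytes too long, and whatever happens to sit next to the buffer in memory gets silently clobbered. In a real C program that neighbor could be a saved return address, which is what makes these bugs exploitable. The names and values below are made up for illustration.

    import ctypes

    class Frame(ctypes.Structure):
        # Two adjacent fixed-size fields, much like a local C buffer sitting
        # next to other data (or a saved return address) on the stack.
        _fields_ = [("buf", ctypes.c_char * 8),
                    ("neighbor", ctypes.c_char * 8)]

    frame = Frame()
    frame.neighbor = b"SECRET"

    payload = b"A" * 12            # four bytes more than buf can hold
    # An unchecked, strcpy()-style copy: nothing stops it at eight bytes,
    # so it keeps writing right past the end of buf.
    ctypes.memmove(ctypes.addressof(frame), payload, len(payload))

    print(frame.buf)               # b'AAAAAAAA'
    print(frame.neighbor)          # b'AAAAET': partially overwritten by the overflow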

{link to this story}

[Mon Nov 22 21:14:52 CST 2004]

This issue of software patents is truly getting out of hand. There are too many people out there who choose to represent any sort of criticism as revolutionary dreams coming from a utopian minority, and the rants of Richard Stallman et alii certainly do not help. Still, the truth is that there is something rotten with the very concept (not to speak of the practice) of software patents, and one does not need to be a Communist to see it. Jim Rapoza writes in eWeek about a couple of really stupid software patents currently being used as an excuse to make some money off Amazon and Dell. For starters, a company called Cendant (I hope I got that link right) was awarded a patent back in 1997 for a "System and Method for Providing Recommendation of Goods or Services Based on Recorded Purchasing History". Dell, on the other hand, is facing patent claims from DE Technologies, which holds a "patent covering international transactions handled over the computer". Come on! I find it difficult to use a term other than leeches to describe these people. One can only hope that, as Rapoza suggests, these companies might have miscalculated when they decided to take on the big powerful corporations, and these may now use their power to convince the legislators to act. {link to this story}

[Mon Nov 22 21:07:25 CST 2004]

While reading an old issue of Linux Weekly News, I came across a couple of truly remarkable quotes by Jeff Merkey: the first about his previous offer to purchase the whole of Linux's source code, and the second about SCO's claim to big chunks of its code.

On a side note, the GPL buyout previously offered has been modified. We will be contacting individual contributors and negotiating with each copyright holder for the code we wish to convert on a case by case basis....

SCO has contacted us and identifed [sic] with precise detail and factual documentation the code and intellectual property in Linux they claim was taken from Unix. We have reviewed their claims and they appear to create enough uncertianty [sic] to warrant removal of the infringing portions.

and...

Yes, I can reveal them. All of XFS, All of JFS, and All of the SMP Support in Linux. I have no idea what the hell RCU is and when I find it, I'll remove it from the code.

As I said, remarkable, truly remarkable. This Merkey is definitely an interesting figure. I am pretty sure anyone following the Linux kernel mailing list holds a special spot very close to his or her heart for the guy. {link to this story}

[Mon Nov 22 11:16:38 CST 2004]

This is the true power of open source. When you allow as much freedom as possible, people will create and innovate at a break-neck pace. As a consequence, we all benefit from it. It is no different from democracy or the free market in that respect. I just read that a few hackers out there ported v9fs (the default filesystem for the Plan9 operating system) to Linux and BSD. So, who cares, right? Well, nobody pretends this is something you should be running in production, but from a purely experimental or scientific point of view the Plan9 guys are working on some exciting things: an operating system where everything truly is a file, a distributed filesystem based on a client-server approach, a system that is network-centric from the ground up, where the transition from local to remote resources is completely seamless... As I said, if you are a system administrator there is no need to rush and install this patch on your production server. However, if you are interested in the world of operating systems you should give it a try. Let us not forget that what these guys are working on today may be what we all run tomorrow. {link to this story}
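
If you want a small taste of the everything-is-a-file philosophy without installing anything, Linux's /proc filesystem, which follows the same idea, already exposes kernel and process state as plain files. A rough sketch, assuming a Linux machine with /proc mounted:

    # A small taste of "everything is a file": on Linux, /proc exposes process
    # and kernel state as plain files, so ordinary file I/O is the whole API.
    # Assumes a Linux machine with /proc mounted.
    from pathlib import Path

    def read_proc(name: str) -> str:
        """Read a /proc entry as if it were any other text file."""
        return Path("/proc", name).read_text()

    print(read_proc("version"))                        # the running kernel's version string
    print(read_proc("self/status").splitlines()[0])    # first line: the name of this process

    # With 9P/v9fs the same trick stretches across the network: a remote
    # server's resources get mounted into the local namespace and read with
    # exactly the same calls.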

[Mon Nov 22 11:06:06 CST 2004]

Bad news for the future of UNIX in the server business. Internet News reports that UNIX continues to lose ground to both Windows and Linux, and it does not appear as if that trend will be reversed anytime soon.

Unix is losing so much ground that IT research firm IDC is predicting Windows will take over as the dominant server operating system by the year 2008.

(...)

Unix servers account for $4.2 billion of total sales in a server market that reached $11.5 billion in the second quarter of 2004. They should end the year with 39.6 percent market share of a $53.3 billion server market. Windows makes up the second largest percentage of server operating systems with 32.2 percent market share.

Linux owns 8.4 percent market share behind IBM's mainframe OS/390, which owns 10.6 percent. That is expected to change in the next four years, though. IDC expects Windows to command a 38.4 percent market share over Unix systems, which is at 31.9 percent. IDC also predicts that Linux will jump to third place with 14.9 percent market share.

As could be expected, price is a key issue. If we add to that the fact that the ix86 architecture is definitely maturing together with the operating systems that run on it (Windows, Linux, the BSDs...), I would not say that Sun's future looks so bright unless they react soon. {link to this story}

[Fri Nov 19 08:57:17 CST 2004]

NewsForge published an article on The myth of stability that goes to show how extremely uninformed technology media can be sometimes. The main point of the author is that stability is seriously overrated in today's technology industry, and that waiting until a given piece of hardware or software has proven itself is not necessarily the best approach, since one's competitors may be benefitting from it in the meantime. It is difficult to see where to start with an article like this. First of all, the author is assuming that a new product will always be better and provide a competitive advantage to its users, which is far from clear. No matter what he says, there are indeed bugs to iron out in almost every product that is released to the public, and in quite a few cases it is precisely those first buyers who play the role of beta testers for the manufacturer. Sorry, it is a reality. But, even more important, the author simply does not appear to be able to grasp the difference between stability and reliability. These are two different concepts.

The perception among pundits and dime-store critics is that the only true indication of stability is time. Apparently the idea is that the longer a program, device, or component has been around, the more stable it is, assuming no problems arise. The trouble with this philosophy is that hardware and software infrastructure is constantly changing. If you have a piece of equipment or software that is unanimously declared "stable," the whole situation changes when new products are released -- you don't know if the supposedly stable product will maintain its vaunted stability when forced to deal with new variables such as a different video card, operating system, or even a different userland program. (...)

The problems brought about by older products causing trouble with newer ones is best shown with operating systems. Debian's "stable" release, Woody, uses an older Linux kernel that doesn't work with a rapidly growing list of hardware components that have been brought to market after the 2.4 kernel was released, such as serial ATA hard drives. Even commercial distributions such as Sun's Java Desktop System have trouble with commonly used hardware because of the age of the kernel. (...)

The Debian project seems to be the epitome of the misguided belief that old is equivalent to stable. Packages in the Debian "stable" or even the "testing" and "unstable" branches can be months or years old. At the other end of the spectrum, Novell releases a new edition of SUSE Linux Professional for three architectures approximately twice per year, and each includes more than a thousand of the latest desktop software applications. SUSE is a distribution made for production environments and home desktops alike, and in terms of reliability, SUSE is as stable as they come.

So, apparently SuSE Linux Professional is a very stable distribution in spite of the fact that, as the author himself acknowledges, it is released several times a year and each release changes hundreds of packages. Want some evidence? Well, it does not crash much. As I indicated above, the author is quite lost. A system can run for months on end without a single glitch or crash, and still be quite unstable if upgrading to new packages in order to fix security vulnerabilities (just to give an example) also introduces changes in their behavior. The obvious conclusion after reading the article is that its author has no idea whatsoever how things work in what he calls the enterprise. When a given company has deployed hundreds or thousands of systems on its network, has spent months testing its key applications to make sure they run without a problem on a given platform, and its employees have spent hours writing scripts and customizing the installations, the last thing anyone there wants is to see all of that go out the window simply because upgrading to the latest and greatest breaks compatibility with older releases. He mentions SuSE Linux Professional, but fails to mention that the very same company also offers SuSE Linux Enterprise Server, which has a much slower release cycle and never fixes a security problem by upgrading to the latest version of a given application, but rather by backporting the security patch. Now, why does he think this is the case? {link to this story}
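
In case the distinction is still not obvious, here is a toy example (the tool and its output format are entirely made up): two versions of a program that never crash, and are therefore perfectly reliable, yet the upgrade still breaks the script written against the old behavior, because the behavior changed. That is the kind of instability enterprises actually care about.

    # A made-up "tool" before and after an upgrade. Neither version ever
    # crashes (both are perfectly reliable), but the new release changes the
    # output format, which is exactly the kind of instability that breaks the
    # scripts a company has written against the old behavior.

    def tool_output_v1() -> str:
        return "user: alice  quota: 100"      # the format everyone scripted against

    def tool_output_v2() -> str:
        return "alice,100"                    # the upgrade "improves" the format

    def parse_quota(output: str) -> int:
        """An in-house script written against the v1 format."""
        return int(output.split("quota:")[1])

    for version, output in [("v1", tool_output_v1()), ("v2", tool_output_v2())]:
        try:
            print(version, "->", parse_quota(output))
        except (IndexError, ValueError):
            print(version, "-> the in-house script just broke")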

[Wed Nov 17 19:15:07 CST 2004]

Today, during a conversation with one of my co-workers, I was asked why the Linux community, and Linus Torvalds in particular, opposes the idea of including a powerful core dump analysis tool (such as LKCD) or a kernel debugger (such as KDB or KGDB) in the Linux kernel. It would definitely help troubleshoot and debug problems with the code, so it is difficult to understand why anyone would dare to oppose the idea. However, as tends to be the case, things are not so clear-cut, and Linus is definitely no moron. He has his reasons, and although one may agree or disagree with him, the truth is that the arguments he uses are something to consider. So, what are the reasons stopping the kernel maintainers from merging a kernel debugger into the main tree? Let us see. First of all, one has to remember that the Linux kernel runs on many different architectures, and the maintainers do a fairly decent job of keeping them all more or less in sync and in a functional state. The kernel debugging tools, on the other hand, are to the best of my knowledge architecture dependent and do not always run smoothly (if at all) under a different type of processor. Needless to say, this is something to bear in mind when making a final decision on the topic, and my feeling is that most project managers would need to hear no more in order to strike it down as an unfeasible proposal. Second, Linus himself has explained several times that he is quite afraid many developers might use these tools as a nice excuse for laziness. What he means is that some people might imbue these tools with some sort of quasi-magical halo and not spend nearly enough time doing what they should be doing if they truly want to get to know the kernel: studying the code. Yes, one thing does not necessarily preclude the other, but working as I do for a company whose main operating system has benefitted from the existence of a mature core dump analysis tool for a long time now, I can say that Linus' fears may not be totally unfounded. Even if the source code is available to the people analyzing the dumps, they still tend to put all the emphasis on the tool itself and, in the end, know less about the actual code than they should. This is a reality. Third, we should remember the Heisenberg effect (many developers comically refer to it as the Heisenbug effect): in other words, the very act of observing an event and using certain tools to measure and analyze it can manipulate and distort the event itself. To make matters worse, it is quite difficult in these cases to tell whether something is directly caused by a bug in the kernel code or by the distortion introduced by the tools we use to perform our own analysis. Finally, and this is perhaps the most powerful argument, the Linux kernel is GPL'ed. Nothing prevents a given company or distributor from patching it with any of these tools and supporting it. For the most part, merging the code into the main source tree would certainly make life easier for the developers who maintain those tools, but that is about all it would accomplish. Anybody (and this includes Red Hat, SuSE or anybody else) can apply the patch and support the tools. As a matter of fact, SGI does precisely that, and Debian also has kernel patches available for all those users who want to use the tools. That said, my guess is that once these tools mature and the field clears up a little bit (i.e., when the different projects converge towards just one or two), some form of kernel debugging tool will be added to the main tree.
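
As a crude illustration of the Heisenbug problem, consider this toy race condition: plain Python threads, nothing to do with actual kernel code, and every name below is made up. Two threads increment a shared counter without any locking and lose updates; add debug output inside the loop and the timing shifts, the numbers move, and the bug becomes that much harder to pin down.

    import threading
    import time

    counter = 0

    def worker(iterations: int) -> None:
        global counter
        for _ in range(iterations):
            tmp = counter      # read the shared value
            time.sleep(0)      # yield the CPU: stands in for the scheduler
                               # switching threads at an unlucky moment
            counter = tmp + 1  # write it back: any increment the other thread
                               # made in the meantime is silently lost

    threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # Two threads times 10000 increments "should" give 20000, but the lost
    # updates usually leave it well short. Replace the sleep with a print()
    # to "debug" the problem and the timing shifts, so the numbers move too.
    print(counter)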

By the way, if you are interested in the topic and would like to read more about it, check out the discussion about KDB versus KGDB published on KernelTrap. It is worth the time. {link to this story}

[Fri Nov 12 21:01:06 CST 2004]

Dru Lavigne publishes a short but good article on OnLamp.com about the differences and similarities between Linux and BSD, covering a little bit of everything: runlevels, startup scripts, recompiling the kernel, installing new software, documentation... By the way, I have to agree with the author that, overall, the BSD man pages are much better (as in more complete and better written) than the ones that come with Linux. Yes, I know perfectly well that GNU info is the official documentation format in GNU/Linux, but that does not change the fact that most of us still tend to use the good old man pages far more often. Let us face it, they are much simpler to use. I definitely do not appear to be the only one who dislikes the info system. {link to this story}

[Fri Nov 12 20:48:43 CST 2004]

If, like me, you subscribe to many mailing lists, you will notice that every now and then you get two copies of a message when people reply to your emails. Yes, it is annoying. And yes, I did wonder many times why the mailing list administrators did not configure their lists to munge the Reply-To headers. Well, as it happens, someone on the ubuntu-users list wondered why they would not use the Reply-To headers to automatically route all replies to the list, and the reply came pretty quickly. Someone out there took the time to put together a very thoughtful piece on why munging the Reply-To headers is a bad idea, and I must acknowledge that the author managed to change my views on the topic. The summary at the end of the article does a great job of summing up his points:

Many people want to munge Reply-To headers. They believe it makes reply-to-list easier, and it encourages more list traffic. It really does neither, and is a very poor idea. Reply-To munging suffers from the following problems:

  • It violates the principle of minimal munging.
  • It provides no benefit to the user of a reasonable mailer.
  • It limits a subscriber's freedom to choose how he or she will direct a response.
  • It actually reduces functionality for the user of a reasonable mailer.
  • It removes important information, which can make it impossible to get back to the message sender.
  • It penalizes the person with a reasonable mailer in order to coddle those running brain-dead software.
  • It violates the principle of least work because it complicates the procedure for replying to messages.
  • It violates the principle of least surprise because it changes the way a mailer works.
  • It violates the principle of least damage, and it encourages a failure mode that can be extremely embarrassing -- or worse.
  • Your subscribers don't want you to do it. Or, at least the ones who have bothered to read the docs for their mailer don't want you to do it.
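
To see the "removes important information" point in practice, here is a small sketch using Python's standard email module; the addresses are made up. Once the list overwrites Reply-To, the plain "reply" command goes to the list instead of the author, and any Reply-To the author had set herself is simply gone.

    from email.message import EmailMessage

    # A message as the author sent it; she asks for replies at another address.
    msg = EmailMessage()
    msg["From"] = "alice@work.example.org"
    msg["Reply-To"] = "alice@home.example.org"     # the author's own preference
    msg["To"] = "discuss@lists.example.net"
    msg["Subject"] = "Question about the new release"
    msg.set_content("Does anyone know when 2.0 is due?")

    # What a munging list server does on the way out: overwrite Reply-To so
    # that every reply is routed back to the list. The author's own preference
    # is silently discarded.
    del msg["Reply-To"]
    msg["Reply-To"] = "discuss@lists.example.net"

    # What a mailer's plain "reply" command does: prefer Reply-To over From.
    print(msg["Reply-To"] or msg["From"])   # discuss@lists.example.net -- a
                                            # private reply now takes extra work,
                                            # and a careless one goes to the list
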
{link to this story}

[Thu Nov 11 16:07:24 CST 2004]

While perusing the entries on the Planet Debian website, I came across Brian Bassett's entry on the problems of writing Windows shell scripts. I am sorry, but anyone who has had a chance to write such a script under Unix or Linux and then had to accomplish something similar under Windows can tell you that Microsoft definitely needs to improve here. Microsoft is light-years behind the times when it comes to allowing for easy shell scripting, and nobody will convince me that this is not important when every single shop I have worked at had such scripts to be run on Windows workstations at logon. I have been through the pain of attempting to configure a Windows XP machine to automatically back up files every night to a home server, and it is something I would not recommend to my worst enemy. The platform simply lacks the flexibility to do the job, and one can easily tell the operating system itself was not designed with the network in mind. {link to this story}
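
For what it is worth, the job itself amounts to very little once a real scripting language is available. Here is a rough sketch in Python (the directories and the mounted network share below are made-up examples), which could then be run nightly from cron or, on Windows, the Task Scheduler:

    # Rough sketch of the nightly backup job described above; the source
    # directory and the mounted server share are hypothetical examples.
    import shutil
    from pathlib import Path

    SOURCE = Path.home() / "Documents"
    DESTINATION = Path("/mnt/homeserver/backup")   # e.g. a mounted network share

    def backup(source: Path, destination: Path) -> int:
        """Copy files that are new or have changed since the last run."""
        copied = 0
        for src in source.rglob("*"):
            if not src.is_file():
                continue
            dst = destination / src.relative_to(source)
            if not dst.exists() or src.stat().st_mtime > dst.stat().st_mtime:
                dst.parent.mkdir(parents=True, exist_ok=True)
                shutil.copy2(src, dst)
                copied += 1
        return copied

    if __name__ == "__main__":
        print(f"{backup(SOURCE, DESTINATION)} files backed up")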

[Thu Nov 11 15:55:14 CST 2004]

This is hacking at its best. Ever since it started selling its set-top boxes five years ago, TiVo has become a great success. So much so that TiVo hackers have sprung up all over the place. What is it that they are coming up with? Well, a whole new array of unofficial features that you can download and add to your own box at home:

TiVo hacks available for download do everything from adding a Web interface to the TiVo unit, converting programs to DVD and other formats, altering TiVo native features, expanding the unit's hard drive, transferring files back and forth from the unit to the PC, and archiving shows at smaller file sizes.

Perhaps the most interesting feature is mfs_ftp, which allows for easy moving of files from the TiVo box to any PC. As you can imagine, it has the potential to be used as an online broadcast service, which would most likely anger quite a few people out there. Who said open source is not creative? {link to this story}
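
I have not tried it myself, but judging from the descriptions, pulling a recording off the box with mfs_ftp looks like plain old FTP scripting once the hack is installed. A hypothetical sketch (the TiVo's address, the port and the file name are all made up; check the mfs_ftp documentation for the real details):

    # Hypothetical sketch of fetching a recording from a TiVo running the
    # mfs_ftp hack. The host, port and file name are made-up examples.
    from ftplib import FTP

    TIVO_HOST = "192.168.1.50"
    TIVO_PORT = 3105                      # assumed mfs_ftp port; adjust as needed
    SHOW = "The Tonight Show.ty"          # made-up recording name

    ftp = FTP()
    ftp.connect(TIVO_HOST, TIVO_PORT)
    ftp.login()                           # assumes the hack allows anonymous login
    with open(SHOW, "wb") as local_file:
        ftp.retrbinary(f"RETR {SHOW}", local_file.write)
    ftp.quit()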

[Mon Nov 8 12:52:55 CST 2004]

Just in case anyone doubted the maxim that, in the world of technology, if something is possible someone out there will try it whether it makes sense or not: a sysadmin has apparently been badly bitten by his crazy idea of setting up the mail spool on a ramdisk. It amazes me that a sysadmin would even consider doing something like this in order to obtain some performance gains while remaining completely oblivious to the implications of such a measure. Well, the system crashed and... the customers' emails that were in the spool at the time are all gone! Man, that hurts! {link to this story}