[Wed Mar 30 15:47:19 CST 2005]

Those who have been running Linux for a while will remember the Progeny folks. Not so long ago, they attempted to make some money by developing, selling and supporting a distribution based on Debian (sort of like what the Ubuntu guys are trying to do now), but it did not work out in the end. However, they somehow managed to transform themselves and sell other services, including customized security updates for older releases of Red Hat Linux whose users had been left out in the cold by the company's decision to pull out of the low-end market. Well, now NewsForge publishes a conversation with Ian Murdock, founder of Progeny, about how they managed to survive the dot-com crash. It makes for an interesting read as well as a great chance to learn from Murdock's mistakes. {link to this story}

[Wed Mar 30 15:13:07 CST 2005]

Linux.com recently published a story with a few tips on how to act when your system has been compromised. Nothing earth-shattering, but it serves as a reminder of what to do in such circumstances: immediately unplug the system from the network, perform some forensics and reinstall. By the way, under the Cleaning the system section the author talks about running the chkrootkit application, but that cleans nothing on the system. It is only useful to detect whether the system has been compromised and perhaps what type of rootkit was used to break into the machine, but that is about it. His advice to run the command daily as a cron job does make sense though, and it is something I have been doing for years now. One more thing I did not like about the article: when your system has been compromised you should reinstall the whole OS, and not just waste your time reinstalling a few packages that you think were altered by the intruder while leaving the rest of the system alone. In reality, you can never be absolutely positive about what the attacker did on the system, so do not waste your time and reinstall the complete OS. It will be worth it in the end. {link to this story}
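
Setting up that daily run is just a matter of dropping a small wrapper script into /etc/cron.daily. A minimal sketch, assuming chkrootkit is installed at /usr/sbin/chkrootkit (paths and mail setup vary by distribution):

```shell
# Write a small wrapper that cron can run every day; the path to
# chkrootkit below is an assumption, adjust it to your system.
cat > chkrootkit-daily <<'EOF'
#!/bin/sh
# Run chkrootkit quietly and mail any findings to root.
/usr/sbin/chkrootkit -q 2>&1 | mail -s "chkrootkit report" root
EOF
chmod +x chkrootkit-daily
# As root, install it so cron picks it up:
#   mv chkrootkit-daily /etc/cron.daily/chkrootkit
```

With -q chkrootkit only prints suspicious findings, so on a clean system the daily mail is empty or absent, which keeps the noise down.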

[Sat Mar 26 14:16:19 CST 2005]

Apparently, Peter Chubb has submitted a patch to the Linux kernel list whose main objective is to move drivers out of the kernel and into user space. The idea is that this way faults in the drivers would not cause the sort of kernel instability that can bring the whole system down. Additionally, drivers could be easily installed and upgraded, avoiding the current mess that forces some users to recompile the kernel in order to apply a patch or enable a given driver that is not enabled by default. Chubb's latest patch implements user space interrupts, although apparently there is an alternative implementation called User Level Interrupt (ULI), written by SGI (its main limitation is that, in its current form, it only applies to the IA-64 architecture). In any case, these are all interesting discussions that could make the Linux kernel easier to use and might give it a needed push on the desktop. {link to this story}

[Sat Mar 26 14:04:51 CST 2005]

Ladislav Bodnar tells us in Linux Weekly News about his nightmare experience with Fedora Core 4 Test 1. Sure, it is a pre-release, but what Bodnar describes sounds more like a very early alpha version of a product:

If we still had any doubts about just how experimental this test release was, they were quickly gone as soon as we completed the installation and rebooted the system. First, we noticed a high number of Python-related errors during the boot. Then, instead of the usual configuration dialog ("firstboot"), we were dropped straight into a GDM login screen (at 800x600 pixel resolution), with the only available account being the root account created earlier. Those Python errors came to haunt us soon afterward, as we were unable to launch many applications (included most of Red Hat's configuration dialogs) and could not connect to Red Hat Networks to check for updates. Evolution crashed during account configuration and OpenOffice.org wouldn't start at all. To add insult to injury, opening Firefox greeted us with: "There ought to be release notes for Fedora Core 3.90 here, but there aren't. In the meantime, we bring you this ASCII art hat."

As I wrote somewhere else, it has also been my experience that Fedora releases are not quite as polished as one might think. I fully understand there are lots of rave reviews out there, but one has the feeling that most reviewers simply install the distribution, reboot the system, check if there is any network connectivity, run a few apps and sit down to write their article. I am just not so sure Fedora is all that usable for day to day work, especially when compared to other excellent distros such as Ubuntu, and this is not something I am saying out of dislike for Red Hat, which I do not have. As a matter of fact, I admire the company and what they have done for Linux. However, that does not change my highly skeptical view of Fedora, which I know is shared by a few friends who have also run into too many problems with this community distro. In view of all this, one wonders if perhaps all the troubles experienced by the Debian folks are just unavoidable for a popular Linux distribution run by a faceless community. For the time being, Fedora has some serious corporate backing, and it does not appear to be doing much better than Debian, to be honest. Actually, the more their product differs from the original Red Hat Linux, the more unstable and prone to last minute changes it seems. {link to this story}

[Thu Mar 24 13:28:06 CST 2005]

DevSource took the time to ask a few key developers about the present and future of scripting languages, and the result is worth a read. Damian Conway, of Perl fame, came up with a really good summary of scripting's strengths:

Firstly, they provide more sophisticated programming tools and support for more advanced programming techniques. For example, Perl provides hashed look-up tables and arbitrary-length arrays as core data types. C doesn't even have a proper string type. Likewise, Perl's data sorting facilities are integrated into the language, so the sorting criteria are directly programmable.

Having all the basic tools of programming (i.e. high-level data types and common algorithms) built into the language, rather than having to build them yourself, means that you need to write less code to solve a given problem. Scripting languages let you use power-tools and pre-fabricated panels to build your home, instead of having to build a stone axe to cut down the timbers to shore up the mine to extract the iron to forge the saw to mill the lumber to get the posts to sink the foundations on top of which you'll eventually construct your dwelling, once you've gone back to the mine to get more iron to cast an adze and some nails and a hammer, and used them to build a ladder and... well, you get the idea.

The second major attraction of interpreted languages is that they let you do incremental development very quickly, without the constant irritation of sitting around waiting for compiles to finish. Many Perl programmers —myself included— develop directly in their favorite text editor; write some code, hit a function key, and see it execute immediately. No frustrating pause, no break in your concentration. That "immediacy" and "seamlessness" is a huge benefit, and often overlooked.

Incidentally, the Five Things You Didn't Know You Could Do With Perl section includes a few interesting and useful scripts that you can just copy, paste and run to perform such handy tasks as archiving files and extracting information from PDF and MP3 files. {link to this story}

[Sat Mar 19 21:17:01 CST 2005]

eWeek Labs publishes an interesting article comparing Solaris 10 with Red Hat Enterprise Linux 4 that definitely sheds a more positive light on Sun's OS than on the Red Hat Linux distribution, and in far more than price alone (yes, we have reached the point where Solaris 10 is free to download and install while RHEL costs a minimum of US $349, although it includes some basic support for that price). Basically, Solaris offers User and Process Rights Management that can be easily set up using both commands and a nice GUI; the concept of zones to isolate services and provide enhanced security (something similar to BSD's jail); predictive self-healing (in summary, a service manager framework that automatically restarts services that came down for whatever reason and is intelligent enough to also restart the other services they might depend on); and, of course, the much talked about DTrace. On top of that, you can also run Linux binaries completely unchanged on Solaris 10 thanks to Janus, and the Solaris community (yes, there is such a thing too) also contributes with Blastwave.org. Against that, Red Hat only has to offer a reworked 2.6 Linux kernel that provides better performance, a choice of disk I/O scheduler to suit your own needs (although the Red Hat documentation does not provide any clues as to which one to choose for a given workload), ExecShield and SELinux. However, these newly added security tools are not nearly as easy to configure as their Solaris counterparts, and they are not as widely tested either. So, altogether the comparison clearly favors Solaris over Red Hat, at least when it comes to features and overall integration of the tools. However, it would not be the first time that technical excellence ends up losing the war, and that seems to be precisely what so many UNIX vendors just do not seem to be able to grasp. They lost to Microsoft more than 10 years ago, and are still trying to figure out what happened.
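
For a flavor of why DTrace gets talked about so much, here is the sort of one-liner it enables: a safe, on-the-fly count of system calls per running process, with no instrumentation compiled in beforehand. This is a well-known example rather than something taken from the eWeek article, and it obviously only runs on Solaris 10:

```
# Count system calls by process name, live, until Ctrl-C is pressed
dtrace -n 'syscall:::entry { @counts[execname] = count(); }'
```

Nothing on the Linux side of the comparison offers an equivalent today without patching and rebooting into an instrumented kernel.
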
So, it should not surprise anyone if Linux ends up catching up with Solaris too, sooner rather than later. {link to this story}

[Sat Mar 19 21:02:39 CST 2005]

About a week ago, Information Week published a story about five business executives who decided to adopt blogging as an internal tool in their companies. Like browsing and email before it, blogging is one of those technologies that many business people out there often reject in a typical kneejerk reaction. In reality, it can be applied to improve communications between the different departments within a company, which tends to be precisely one of the main contributors to employee dissatisfaction, infighting and other inefficiencies. I have the feeling that sooner or later most executives will just learn to accept it as something more than a potential time sink, which is admittedly one of its risks. {link to this story}

[Sat Mar 19 15:39:29 CST 2005]

eWeek tells us about the not so well known world of open source software on Windows, which includes such important applications as Firefox, Thunderbird or OpenOffice.org, of course, but also lesser known projects such as PuTTY (the SSH client), 7-Zip, WinSCP or the Gaim plug-in for encrypted conversations. Even less known is the fact that it is possible to run GTK and Qt applications on Windows, including the whole KDE desktop environment. Those who would like to test open source software tools on Windows might want to check out Cygwin or The OpenCD Project. Additionally, the OSSWin Project maintains an extensive list of open source applications that run on Windows. Incidentally, one of my favorite websites for Windows freeware (mind you, this is not the same as open source software) has always been Freeware Home. Download and enjoy! {link to this story}

[Sat Mar 19 15:21:51 CST 2005]

Not sure what to think of the progress of Intel's Itanium architecture so far. Several years after it was released, one can definitely say it has not seen the success other Intel processors had in the past. On top of that, it is pretty clear by now that AMD is posing a serious threat to Intel's market at the lower end: home desktops, mobile devices, non-specialized workstations and even servers for what has come to be termed the infrastructure (web, mail, DNS...). To make Intel's troubles even worse, it has become clear by now that it is precisely the smaller vendors that are betting their future on Itanium. We are talking about the likes of NEC, Fujitsu, SGI and Unisys. Yes, HP is also there, at least for the time being (who knows what will happen after a new CEO takes over Fiorina's office?), but one misses other rather well known names, such as IBM or Dell. None of those are clearly betting on Itanium's future, which can only mean one of two things for the smaller vendors mentioned above: either they are about to lose whatever they have left in a senseless gamble, or Itanium will be their savior. It is way too early to tell, of course, but I do have the feeling that Itanium will be a winner at least in certain highly specialized markets where performance is a central need, even if it comes at a premium price. I am thinking about the high-performance and scientific market as well as perhaps the visualization market. In this sense, Itanium appears to be the perfect partner for companies like SGI or NEC. On the other hand, Itanium's gains will inevitably come at a cost for other vendors, but for whom? I would say it is IBM and Sun who have the most to lose as the Itanium architecture expands in these markets. I am also convinced this should worry Sun more than IBM, for Big Blue has diversified its sources of revenue in the past few years and makes a lot of money from its services division, while Sun's bread and butter is still in its high-end servers.
And the low-end market? My feeling is that there will be two players there: Intel and AMD. {link to this story}

[Fri Mar 18 14:57:50 CST 2005]

All those who think that people's dreams of forcing Microsoft to compete are just baloney would do well to check the latest leaks on Internet Explorer 7.0 published by the Microsoft Watch guys. All it took was a few articles in the trade magazines praising Firefox as an efficient and secure solution for most companies out there to get the Microsoft folks doing some work on their old browser again. Of course, the features they are talking about are definitely not going to impress anyone: tabbed browsing, an embedded RSS aggregator, transparent PNG... By the way, it does not look as if this new version of MSIE will support CSS 2.0, which means that web developers all over the world will continue struggling to write web code that is completely portable. {link to this story}

[Fri Mar 18 13:06:59 CST 2005]

Molly Wood published a story yesterday about what she sees as Google's clear moves towards building a web-centric operating environment that could pose a threat to Microsoft, noting, for example, the hiring of some key architects of the Windows OS, such as Marc Lucovsky, as well as the acquisition of companies such as Picasa or Keyhole, not to mention the release of products such as their Google Deskbar. Well, Lloyd Dalton published some quick notes in his blog today explaining why these conspiracy theories simply make no sense and Google's moves can be explained without resorting to ghosts and legends. That is no reason though to disagree with Wood's initial assessment that sooner or later it may be possible to take the network-centric approach and forget about the operating system. I would also agree that the day that happens the Web will most likely be at the center of it. My feeling is that we still run many applications using the fat client approach mainly out of habit. As a matter of fact, it would not be the first time I see an application implemented using that approach when it would have made much more sense to make it a web application. {link to this story}

[Fri Mar 18 12:46:31 CST 2005]

One more example of the stupidity of software patents. Groklaw recently reported that Microsoft registered US Patent Application #20040210818 for what amounts to little more than storing data from a word processor in XML. The title is "Word-processing document stored in a single XML file that may be manipulated by applications that understand XML", and it is defined in the abstract as:

A word processor including a native XML file format is provided. The well formed XML file fully represents the word-processor document, and fully supports 100% of word-processor's rich formatting. There are no feature losses when saving the word-processor documents as XML. A published XSD file defines all the rules behind the word-processor's XML file format. Hints may be provided within the XML associated files providing applications that understand XML a shortcut to understanding some of the features provided by the word-processor. The word-processing document is stored in a single XML file. Additionally, manipulation of word-processing documents may be done on computing devices that do not include the word-processor itself.

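To put the abstract in concrete terms, a word-processor format of this kind looks roughly like the following. This is a hypothetical sketch for illustration only, not Microsoft's or AbiWord's actual schema:

```
<!-- Hypothetical word-processor document stored as a single XML file -->
<document>
  <styles>
    <style name="Normal" font="Times" size="12pt"/>
  </styles>
  <body>
    <p style="Normal">Plain text with <b>bold</b> and <i>italic</i> runs.</p>
  </body>
</document>
```

Any application that understands XML can read or transform a file like this without the word processor being present, which is exactly the behavior the patent claims.
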
In other words, the same thing AbiWord and many other applications have been doing for a few years now. Honestly, if both the US and the EU continue down this path we should expect nothing but trouble. {link to this story}

[Wed Mar 16 20:48:33 CST 2005]

About a week ago, Eugenia Loli-Queru published an article where she criticized the hobbyist approach that open source software takes to development in general, and by now I think it is safe to assume she never expected such a big brouhaha to erupt over her words. Actually, it has been so controversial that Loli-Queru herself published another article yesterday with some clarifications. Yet, anybody who has been following (and using) the world of open source software for a while cannot deny that her main argument has merit:

To make it clear: I am not against Open Source Software, in fact, I am for it. But I am increasingly frustrated with Open Source software written by hobbyists; hobbyists who write a specific application or library because they need a specific function out of their applications, for their own needs and only their own needs.

We can whine and moan all we want, but the fact remains that Loli-Queru has a point. She can apologize all she wants for her confrontational style, and perhaps she truly wishes she had expressed herself in different terms. Still, that does not change the fact that she is right. Of all the opinions I have read surrounding this controversy, an article by Thom Holwerda published by Expert Zone is perhaps the best, precisely because the author takes Loli-Queru's arguments further:

What we are dealing with here is an issue that completely contradicts the traits given to the OSS community by its members. Open-source advocates claim that OSS projects better fulfil the needs of their users, respond to and fix bugs faster, and generally respect the users more than their closed-source counterparts. While this might be true in some cases, it obviously is not an inherent trait of OSS...

(...)

Now, as I said, I'm a fan of Gnome, it's my DE of choice on any *nix system. However, when you clearly claim that you have a "user first" philosophy, how on earth can you state that the developers will only implement features that they themselves deem necessary? That's a very big contradiction. Very big. All that was asked for in that discussion was a means for users to influence the direction in which their beloved DE goes —exactly, perfectly complying with the "user first philosophy".

It all boils down, it seems, to the following key points made by Holwerda:

A lot of people claim that because the Gnome developers mostly work on the project in their free time, you should be grateful with what you get. This excuse would definitely be valid, were it not for the fact that Gnome is not a hobby project; millions of people depend on it everyday. Also, Gnome goes to great lengths to, rightly so, emphasise that they are the DE of choice for most corporate Linux desktops (Red Hat, Novell Linux Desktop, Sun's JDS) and leading user's distributions (Fedora, Ubuntu). With that comes responsibility. And that responsibility is something you have to bear on your shoulders; if you won't, or can't, then make way for someone who wants, or can.

All developers wishing to take part of the Gnome community automatically inherit this responsibility; it's not something you can choose to ignore simply because you only want to work on the features you find exciting for yourself. That's not only a tad bit selfish, but also a tad bit ignorant. Not only millions of ordinary users depend on you, but also big corporations deploying Red Hat and Novell Linux Desktop. If they (these oh-so-important companies running Linux/Gnome) see a lack of interest from the developers to fulfil their wishes —then say bye-bye customers.

In other words, the key problem is that open source software is not just for hobbyists anymore. There is simply too much invested in it, including the money of several large companies (not to mention many people's hopes for change and for putting an end to Microsoft's deadening monopoly). Yet, we all behave as if nothing had changed and this were still some sort of playground. Like it or not, projects such as the Linux kernel or GNOME are too central to many people's lives to simply continue with the old development method of scratching our own itches. Nobody doubts that philosophy can still apply to smaller applications, but certain projects are just too central and too important to take such an amateurish approach. After all, it is precisely this mindset that led to the sad state of affairs described by Marcus Ranum in another article that I mentioned here recently. {link to this story}

[Tue Mar 15 16:08:28 CST 2005]

I just came across a couple of interesting news pieces today. First of all, it has been reported that Fedora is taking off in the web server market as Red Hat enterprise solutions decline. The data comes directly from Netcraft, which also reports some increase in Debian and SUSE deployments, among others. It is not clear if they are comparing the latest RHEL numbers to their own previous statistics or rather to the deployment of the old Red Hat Linux low-end distribution. If the former, I would say it is definitely something for Red Hat to worry about; if the latter, it is just common sense and fully expected. Also, on a somewhat related note, ZDNet UK reports that some German Government sites switched to Debian after SUSE's takeover by Novell. This (the political argument) is something that has often been ignored in the analyses I have read on open source, but it just makes sense that certain countries try to take advantage of the new situation to decrease their overdependence on American technology. {link to this story}

[Mon Mar 14 16:23:08 CST 2005]

Steve Langasek, the Debian release manager, has proposed dropping many of the architectures currently supported by the project in order to solve its well known release problems. Actually, the proposed cut would be so drastic that, if approved by the community, Debian would only support four architectures: x86, AMD64, PowerPC and IA-64. Yes, this will generate a lot of controversy, but it is hard to see another solution to Debian's current release problems. It is simply not possible to continue the project under these conditions, where the stable release lags behind other major distributions by a matter of years. I fully understand that is not such a big issue on the server side, but it definitely does make a difference on the desktop side. After all, who wants to run a Linux distribution still based on GNOME 1.4 these days? Heck, even on the server side, it would be pretty normal for a system administrator to expect to run PHP version 4, for instance. Yes, I know, many other people set up alternative repositories out there, and that helps. Still, it is just common sense. Why maintain so many architectures when they are barely used in the community? Langasek's proposed method (officially support and maintain only a subset of those architectures, and allow the others to exist in some unofficial form) simply makes sense to me. {link to this story}

[Sun Mar 13 20:19:56 CST 2005]

We have heard many times the story of how the UNIX camp became too fragmented in the early 1990s and, as a consequence of too much infighting, lost the war to Microsoft. Now Marcus Ranum, an expert in systems security and a long time UNIX user, warns us that Linux could suffer the very same fate. One may be tempted to consider this a brainless rant, especially since Linux and open source appear to be on a winning streak, but Ranum has a point.

In 1985, when I wrote code for my UNIX machine, it worked on all the other UNIX machines because there was basically a single flavor of UNIX, which all used the same compiler, and everything just worked. Today, you actually have to be quite careful if you want to write code that compiles and works correctly on Solaris, Linux, and BSD. Indeed, most open source packages now include special tools that dynamically reconfigure the code based on complex knowledge bases that encode the differences in how Solaris says "tomato" and Linux says "tohmato."

It takes longer to configure code than to compile it these days, which is categorically not the case on Windows. Commercial grade Windows software just works and usually keeps working. Do you think that this might, just maybe, have something to do with the reason that major apps like Adobe's Photoshop, Macromedia's Director, Adobe Premiere, etc, are still not available on UNIX and, in my opinion _never_ will be... [Usually, this is the point where someone jumps up and yells "Photoshop runs on Macs - and Mac OSX is UNIX!" That's true, but it doesn't count. Photoshop was coded to the MacIntosh user interface, not X-windows, and functions on OSX as a side-effect of the excellent backwards-compatibility that Apple slavishly built into their kernel-swap.]

(...)

The early days of the Linux movement was heralded with grand pronouncements of war to the death with Microsoft —war from the desktop to the data center, and a free, compatible high performance alternative to Windows. What I see now is that the open source movement is more like a 14-year-old punk standing in the street telling Mike Tyson that he had an a**-whipping coming. Not the Mike Tyson we see today, either, but the Mike Tyson who could deliver a line-straight punch and knock a hole through the side of a steel I-beam.

There is little doubt Linux is a serious competitor against Windows in the server market, but whether or not it ever poses a direct threat to it on the desktop is a different issue. I must say I do like what I am seeing lately in the form of Fedora and Ubuntu, but certain applications are still missing, and I still hear from many friends who have no trouble installing or running Windows but get into trouble configuring even these user-friendly distributions. {link to this story}

[Fri Mar 11 15:30:01 CST 2005]

I am glad to read that the preview of Ubuntu 5.04 is out, including among other things GNOME 2.10, X.org 6.8.2, several new desktop package management tools to make it even easier to stay up to date and... kickstart compatibility for automatic installation! This last feature is a godsend, I think. As a Red Hat refugee, if there is something I quite miss in the Debian camp, it is this kickstart technology. I simply cannot see how it would be possible to do serious consulting in the SMB market without something like it. Sure, there are alternatives such as System Imager and FAI, but they are just not the same by any stretch of the imagination, as proven by the fact that both the Ubuntu and Progeny folks felt compelled to "port it" to their own distros. {link to this story}
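
For those who have never used it, a kickstart file is just a declarative list of the answers the installer would otherwise ask for interactively; point the installer at it and the whole installation runs unattended. A minimal sketch, with purely illustrative values, and keeping in mind that the exact set of directives Ubuntu honors may differ from Red Hat's:

```
# ks.cfg - minimal unattended install (illustrative values only)
lang en_US
keyboard us
timezone America/Chicago
rootpw changeme
clearpart --all
part / --fstype ext3 --size=1024 --grow
%packages
@ base
```

The appeal for consulting work is obvious: one small text file per customer profile, and every machine comes out installed identically.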

[Thu Mar 10 11:49:54 CST 2005]

Maui X-Stream has finally released its CherryOS, which became the center of attention a few months back when they promised a new and revolutionary product to emulate the G4 architecture that supposedly would not be negatively affected by performance issues, as tends to be the case. Well, as many suspected, it turned out to be little more than a re-branded PearPC with some closed-source changes. Needless to say, this brings up a few legal issues, since Maui X-Stream's actions may be against the GPL. In any case, at least it had the unintended consequence of calling people's attention (including mine) to this little open source project that few knew anything about. Check out the PearPC screenshots page. My guess is that, just like other similar emulation environments, PearPC is slow as molasses, but who knows, perhaps it will be a nice surprise. {link to this story}

[Thu Mar 10 09:35:34 CST 2005]

GNOME 2.10 has been released. Check the release notes for more information about new features and improvements. Among my favorites is an improvement on the window behavior:

In the past, while typing something into one application when suddenly your instant messenger offered a chat request from your friend, your words would be typed into the chat window. Imagine if you were typing your password at the time. This should no longer happen in GNOME 2.10.

In addition, if an application takes a long time to start, your work will not be interrupted when it finally opens its window.

{link to this story}

[Wed Mar 9 21:09:16 CST 2005]

Nobody can deny that Steve Jobs has put Apple back on the map once again, right when many people (including me) thought that it was already dead. Now it looks as if Apple is competing hard in the server market too, which is totally new for them. I can attest to that, since a few of my friends who work as sysadmins for printing shops and companies in the graphics and video editing or multimedia/gaming businesses have already told me they are seeing a growing number of Xserve machines at their workplaces. In general, they seem to like them, although these systems certainly appear to lack the management and debugging tools other open source OSes have, not to mention commercial Unices. Still, they are a nice deal: a decent, easy to use OS based on open standards that is easy to integrate with Windows, Linux and UNIX boxes and, on top of that, hardware at a great price when compared against Dell servers with similar specs. All of that without any need to lose sleep over frightening viruses and eternal security holes. It certainly sounds like a winner to me. Now, where is my Mac? {link to this story}

[Mon Mar 7 14:53:23 CST 2005]

Novell announced today that the next release of its SUSE desktop will include Beagle, the desktop search tool written by the Mono guys. Also, a few days ago I came across a short piece where the author wondered why there is no such thing as a SUSE community to speak of when, at least since Red Hat switched gears and called off its low-end Linux distribution, all the pre-conditions appear to be there for them to take over. In reality, it looks as if SUSE was not able to capitalize on the discontent over Red Hat's recent decisions, and instead it has been the likes of Fedora and Debian that benefitted from it in the form of wider community support. It is, nevertheless, surprising how little traction SUSE ever got, here in the US at least. Things may be quite different in Europe perhaps, but it is not easy to come by hardcore SUSE users over here, perhaps because their approach, as in the case of Caldera, was "too commercial" from the very beginning. While other distributions could be downloaded for free from the Internet, SUSE never made it easy for the so-called hobbyists. Mind you, the distribution is rock solid and very well put together (one of those products where "things just work"), but technical merit is not always what matters. {link to this story}

[Mon Mar 7 09:10:58 CST 2005]

Linux.com publishes an article about podcasting from Linux with plenty of good information on how to broadcast audio streams using your Linux box. I believe we follow a similar process at SGI to get some of our meetings on disk and distribute the audio streams internally. It works just fine. {link to this story}

[Fri Mar 4 14:03:38 CST 2005]

A couple of stories related to the Linux kernel. First of all, Alan Cox gave a talk at FOSDEM where he spoke about his experiences working with Linus, providing some insights:

Linus is a good developer, but is a terrible engineer. I'm sure he would agree with that. (...) Linus has this bad habit of fixing security holes quietly. This is a bad idea as some people read all the kernel patches to find the security holes.

On a related note, KernelTrap carries a discussion on kernel release numbering issues that is worth a read:

In his recent email, Linus explained, "the problem with major development trees like 2.4.x vs 2.5.x was that the release cycles were too long, and that people hated the back- and forward-porting. That said, it did serve a purpose —people kind of knew where they stood, even though we always ended up having to have big changes in the stable tree too, just to keep up with a changing landscape." His new proposal involves still using an even and odd numbering scheme, but on a smaller level. Thus, the upcoming 2.6.12 would be "stable" in that it should only contain bugfixes over 2.6.11. Then 2.6.13 would be more development oriented, including some larger changes. These larger changes would again stabilize in 2.6.14, and so on. He adds, "we'd still do the -rcX candidates as we go along in either case, so as a user you wouldn't even _need_ to know, but the numbering would be a rough guide to intentions. Ie I'd expect that distributions would always try to base their stuff off a 2.6.<even> release."

During the discussion, Linus notes that he's not unhappy with how 2.6 development is going, "I _do_ believe the current setup is working reasonably well." Instead, he suggested that he hopes to placate those that have requested additional stability, "although those people seem to not talk so much about 'it works' kind of stability, but literally a 'we can't keep up' kind of stability (ie at least a noticeable percentage of that group is not complaining about crashes, they are complaining about speed of development)."
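
Under the even/odd proposal described above, the stability intent of a tree could be read straight off the third version component. A toy sketch of that rule (a hypothetical helper, and remember the scheme ended up not being adopted):

```shell
# Classify a kernel version under the proposed (and never adopted)
# 2.6.<even>=stable / 2.6.<odd>=development numbering scheme.
classify() {
    minor=$(echo "$1" | cut -d. -f3)
    if [ $((minor % 2)) -eq 0 ]; then
        echo "stable"
    else
        echo "development"
    fi
}

classify 2.6.12   # stable: bugfixes over 2.6.11 only
classify 2.6.13   # development: larger changes allowed
```

The point of the proposal was exactly this kind of at-a-glance readability: distributions would base their releases on a 2.6.<even> kernel without having to dig through the changelogs first.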

The story contains plenty of pointers to the discussion threads from the lkml list. In any case, in the end it looks as if Linus decided against using the new odd/even numbering system and is instead moving to a more fine-grained 2.6.x.y system. According to this new method, Linus will be in charge of the 2.6.x releases as well as the -rc releases in between while work will continue on the 2.6.x.y releases. {link to this story}

[Tue Mar 1 14:10:03 CST 2005]

Enrico Zini sent an email to the debian-devel list with some information on how he managed to put /etc under revision control using svk:

# Install svk
apt-get install svk

# Initialize a depot in /root/.svk
svk depotmap --init

# Import /etc, making it a working copy
svk import --to-checkout //etc /etc

# Make the depot readable only by root
chmod -R go-rwx ~/.svk

# Remove volatile files from revision control
cd /etc
svk rm -K adjtime ld.so.cache

{link to this story}