[Fri Jan 27 10:14:53 CST 2006]

For the most part, Slashdot discussions are full of moronic comments and misinformed opinions, but every now and then there is a good nugget of information. That happened to me this morning while checking out a recent story on Intel's and HP's commitment to invest US $10 billion to boost Itanium:

> "Clusters are only really good for embarassingly parallel problems."

You are sort of right about that, overall. It's just that science isn't standing still, and most new algorithms are specifically designed to be parallelized.

Most physics problems are solved with matrices, and matrices are of course easy to parallelize. And physicists are the ones buying most of the big iron anyway.

Nowadays, most weather prediction, astronomy, optics and micromeasurement tasks are also optimized for clusters, not big iron. It's not about top performance; it's about the price/performance ratio. For the same money, people can buy a cluster with, say, 10 times more raw performance and run suboptimal algorithms (say, 2-3 times slower), and the jobs still get done quicker and cheaper. Yes, clusters have higher latencies, but their workloads are still dominated by batch jobs, not interactive jobs. Big iron has better interconnects, but those redundant interconnects take the lion's share of such a system's cost.

In fact, the main reason this happened (clusters taking over from big iron) is RAM prices. Back in my university days (early-to-mid 90s), we were all preoccupied with the shared-memory problem: RAM was very expensive. Now people walk into the nearest store, pick up a few gigabit NICs and several GBs of RAM, and pay next to nothing for all of it.

Ask anyone in computer science now: everyone has started throwing RAM at the latency problems of clusters. It may look bad on paper and in theory, but in practice it just works.

P.S. On topic: IA-64 has great performance. But again, on the price/performance scale it loses immediately to Intel's own Xeons and AMD's Opterons. Intel keeps refusing to shift its Itanic focus from features to more affordable prices. The story has been covered quite well by The Register; check the posted links.
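
Just to make the commenter's price/performance arithmetic a bit more concrete, here is a quick back-of-the-envelope sketch in Python. Every number in it is an illustrative assumption lifted from the comment's own ballpark figures, not a benchmark:

    # Assumption: for the same budget, a commodity cluster delivers roughly
    # 10x the raw performance of "big iron", but the distributed algorithm
    # is assumed to run 2-3x less efficiently than the shared-memory one.
    big_iron_raw = 1.0        # normalized raw performance for the money
    cluster_raw = 10.0        # assumed raw performance for the same money
    cluster_penalty = 2.5     # assumed slowdown of the parallel algorithm

    big_iron_effective = big_iron_raw
    cluster_effective = cluster_raw / cluster_penalty

    print("Effective throughput, big iron: %.1f" % big_iron_effective)
    print("Effective throughput, cluster:  %.1f" % cluster_effective)
    print("Cluster finishes the batch job ~%.0fx sooner" %
          (cluster_effective / big_iron_effective))

Even with a hefty algorithmic penalty, the cheaper hardware still comes out well ahead on batch work, which is exactly the commenter's point.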

There is some food for thought there. And, sure, there are still cases where a given user will definitely need the extra inch of performance that a particular platform can provide, but in reality that niche market may just be way too small to make money from unless one runs a very lean company with low costs. {link to this story}

[Fri Jan 27 09:28:37 CST 2006]

Much has been said in the last few years about the end of UNIX, so Information Week dedicated the cover story of its latest issue to the topic: What's Left Of Unix?. Here is a snippet of information to put things in perspective:

Fewer systems are being shipped, but they're commanding a higher premium than ever. Unix still represents a $2 billion market, the largest operating system market by far. Despite Windows Server's recent gains, it still represents about $1.6 billion, when you're looking at operating-system-only revenues. And Linux, in terms of revenues, represents one-tenth of what the good, gray Unixes combined represent. Granted, the future belongs to Linux, but as a $2 billion market, is Unix dead?

As usual, things are more complex than they seem, although the trend towards Windows and Linux appears to be clear. {link to this story}

[Thu Jan 19 15:51:00 CST 2006]

Yesterday, I read that the nice fellows at Ubuntu are considering launching some sort of .Mac site. I suppose the whole thing started when journalists noticed a proposal on the Ubuntu Wiki to create an Ubuntu.Mac that would include webmail via IMAP, a calendar, an address book, blogging and news-reading capabilities. Who knows? It may even work out in the end. Actually, Ubuntu is, as far as I can see, the only Linux distribution out there capable of building a decent home desktop with an attractive price tag (i.e., free). Red Hat and Novell only seem to be interested in the enterprise market, which, I believe, may be a mistake. I understand they currently make money only on the server, and nobody can blame them for thinking that only companies will be willing to pony up and pay for an operating system that, after all, is freely downloadable off the Internet.

In this sense, however, Ubuntu appears to be out-redhatting Red Hat itself. Their business model is quite similar (make money on support and services), but with a big difference in approach: the OS itself is still free, and it is not a "community release" but rather the very same release companies and paying customers get. Will they make the model work? We will have to wait and see. However, there are plenty of governments out there that are switching to Ubuntu already, even if they have to pay some amount of money for support and other services. After all, no matter how much they pay for that, the amount is still significantly lower than if they chose Red Hat or SUSE. Of course, in order for the model to work, Ubuntu needs either to build huge volume or to diversify into other areas. This Ubuntu.Mac idea may go precisely in that direction.

By the way, if, like me, you are an Ubuntu fan, you will love this Ubuntu Guide document. {link to this story}

[Wed Jan 18 08:41:13 CST 2006]

I just came across an interesting paper on high-performance SSH/SCP that includes some patches to apply to different versions of OpenSSH in order to increase its overall performance. According to the abstract:

SCP and the underlying SSH2 protocol implementation in OpenSSH is network performance limited by statically defined internal flow control buffers. These buffers often end up acting as a bottleneck for network throughput of SCP, especially on long and high bandwidth network links. Modifying the ssh code to allow the buffers to be defined at run time eliminates this bottleneck. We have created a patch that will remove bottlenecks in OpenSSH and is fully interoperable with other servers and clients. In addition, HPN clients will be able to download faster from non-HPN servers, and HPN servers will be able to receive uploads faster from non-HPN clients. However, the host receiving the data must have a properly tuned TCP/IP stack. Please refer to this tuning page for more information.

The amount of improvement any specific user will see depends on a number of issues. Transfer rates cannot exceed the capacity of the network nor the throughput of the I/O subsystem, including the disk and memory speed. The improvement will also be highly influenced by the capacity of the processor to perform the encryption and decryption. Less computationally expensive ciphers will often provide better throughput than more complex ciphers.
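
To get a feel for why a fixed flow-control buffer becomes the bottleneck on long, high-bandwidth links, it helps to look at the bandwidth-delay product: the sender can never have more than one buffer's worth of data in flight per round trip. Here is a small sketch; the buffer sizes and round-trip time are assumptions for illustration, not values I have checked against OpenSSH itself:

    # Upper bound on throughput imposed by a flow-control window of a given size:
    #   throughput <= window_size / round_trip_time
    def max_throughput_mbps(buffer_bytes, rtt_seconds):
        return (buffer_bytes * 8.0) / rtt_seconds / 1e6

    rtt = 0.070                      # assumed 70 ms cross-country round trip
    static_buffer = 64 * 1024        # assumed small, statically sized buffer
    tuned_buffer = 4 * 1024 * 1024   # buffer sized closer to the bandwidth-delay product

    print("Cap with the static buffer:   %.1f Mbit/s" % max_throughput_mbps(static_buffer, rtt))
    print("Cap with the run-time buffer: %.1f Mbit/s" % max_throughput_mbps(tuned_buffer, rtt))

With numbers like these, the static buffer caps the transfer at well under 10 Mbit/s no matter how fast the link, the disks or the CPUs are, which is precisely the limitation the patch is meant to remove.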

I still have not given it a whirl, but it sounds intriguing. My guess, though, is that CPU utilization should also go up after applying these patches, since SSH will have to manage larger buffers. As such, they may not make it into the mainstream OpenSSH distribution for a while, but it may well be worth a try. {link to this story}

[Mon Jan 9 16:48:21 CST 2006]

So, a few minutes ago I was reading part 1 of Why not Python?, by Collin Park, recently published in Linux Journal, when I noticed the try/except construct he uses in one of his scripts:

    import sys
    MoreCoconuts = "Whatever"

    for Coconuts in range(5,99999):                 #A
        Pile=Coconuts                               #B
        try:
            for dummy in range(0,5):                #C
                if (Pile%5)!=1: raise MoreCoconuts  #D
                Pile=(Pile-1)/5*4                   #E
            if Pile%5==0:                           #F (skip G if nonzero)
                print "Victory! "+`Coconuts`        #G
                sys.exit()

        except MoreCoconuts:
            continue

Park himself explains what that means:

The try/except construction may be familiar to C++ programmers. The basic idea is that anywhere in the "try" block, you can "raise" an exception, which is caught in the "except" block. Uncaught exceptions go to the next enclosing "try". If uncaught there, they propagate to the Python runtime system, which typically terminates your program.
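
Just to make the comparison concrete, here is my own rewrite of the same search with the exception replaced by ordinary control flow (a for/else loop); this is not from Park's article, and the algorithm itself is untouched:

    import sys

    for coconuts in range(5, 99999):
        pile = coconuts
        for _ in range(5):
            if pile % 5 != 1:            # this candidate fails, so...
                break                    # ...move on to the next one
            pile = (pile - 1) // 5 * 4
        else:                            # all five rounds completed without a break
            if pile % 5 == 0:
                print("Victory! %d" % coconuts)
                sys.exit()

Both versions do exactly the same thing: the raise/except pair is simply a jump out of the inner loop and back to the top of the outer one.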

Let us put it a different way, then: what is the difference between this try/except construct and the much-maligned goto? Sure, there are some slight differences, but their essence is just the same. Actually, the arguments so often used against the infamous goto all apply here too, as far as I can see. Is this some sort of geeky double standard? What is it exactly that makes a feature a sin in one programming language, only for it to suddenly become a good idea in another? {link to this story}

[Mon Jan 9 10:03:30 CST 2006]

InformIT publishes an article by Owen Linzmayer listing the top ten things he hates about MacOS X. No matter how much I like the OS, I must agree with some of them too: dock items bouncing indefinitely, issues with the trash app, an annoying software updates app that can only be used for Apple's own updates (yeah, I find this annoying in the case of Windows too), the fact that Dashboard widgets are modal (i.e., you cannot use them individually, but rather have to get into Dashboard mode in order to use any of them), the inability to rename files from within the sidebar view... Clearly, MacOS X is the closest thing out there to a truly user-friendly OS, but it still has some room for improvement. By the way, OSNews also published a review of Apple's latest iMac G5 that is well worth a look. Here are the reviewer's conclusions:

At the beginning of this review, we asked the question whether or not the iMac G5 is worth its pricetag. I can answer that question with a firm, confident, and solid "yes". The iMac G5 in its current form is, in my opinion, the best personal computer for home and/or office use that money can buy. If you compare the iMac to what Apple's competitors have to offer, it becomes painfully clear just how far Apple is ahead when it comes to making user-friendly, visually attractive, and plug-and-play home computers.

{link to this story}

[Mon Jan 9 09:19:37 CST 2006]

Well, so much for the big rumors about Google. In the end, Larry Page did show up at CES on Friday, but the products he announced were, to be honest, a far cry from the frontal assault on Microsoft we heard rumors about. It basically boils down to two fairly routine things: Google Video, a store where users will be able to purchase video content such as prime-time and classic TV shows, Sony BMG music videos and historical content from ITN, among other things; and Google Pack, a collection of software packages that includes things like Google Desktop, Google Earth, Norton AntiVirus 2005 Special Edition, Adobe Reader 7, etc. In other words, nothing earth-shattering. Ars Technica has a nice piece on the announcement, and Paul Thurrott's SuperSite for Windows also has a great review of the Google Pack. {link to this story}

[Sun Jan 8 18:41:05 CST 2006]

A few weeks back, I came across an article by Curt Monash published by Computer World making the case for the diskless PC, albeit with a new and intriguing approach, I think.

Here's how the basic hardware setup could develop: Instead of relying on fixed disks, PCs would have ports for two to four or more flash drives. One or two would hold the operating system and most of the programs. The others would be focused mainly or entirely on data. And these flash drives would be portable from system to system, although there might be a partial exception for small devices such as phones, cameras, personal organizers or music players. But even if they kept a little onboard storage, it could be loaded from and backed up to a flash drive fitting into at least one port.

This concept has one huge difference from most other diskless PC plans: Rather than all living on a big server in the cloud, your personal data (and software) would never leave your custody. Thus, issues of network reliability, service provider lock-in, service-provider privacy safeguards and so on would all be mitigated. What's more, migration is almost a nonissue; older fixed-disk computers with USB ports fit into that diskless world perfectly well.

Now, Monash may be onto something here. The problem with the old thin-client concept is precisely what he points out above: over-dependency on the reliability of the network; extreme centralization, with all the added complexity that entails and, above all, a single point of failure for the whole system; difficulty adapting to a world where mobility is the number one characteristic of business and the workforce; slow connections... Monash's diskless-station proposal, however, seems to be a win-win: both data and applications (heck, even the operating system itself) travel with the user, the system is completely decentralized, and virtually any system out there with a USB port and plain vanilla hardware will do. As a matter of fact, taking into account the latest advances in USB flash drive technology, where a 4GB portable flash memory drive costs less than US $340, moving towards a world where many employees carry around one of these with a customized Linux distribution, plus a smaller one containing their data, makes a lot of sense, especially for many businesses out there, not to mention freelancers, sales people and other professionals.

By the way, for the adventurous souls out there, here are some directions to install Ubuntu on an external USB drive. It would not surprise me if we see this more and more often in the near future. I am convinced Monash's vision just makes sense. {link to this story}

[Fri Jan 6 07:36:09 CST 2006]

A good friend of mine sent me a link to the Web Economy Bullshit Generator, which should definitely prove quite useful at my next All Hands. The poor imagination of most managers never stops amazing me. It is not only the sheer mediocrity that gets on my nerves, but also the way they stick to these ready-made empty concepts, always following the latest fad. Yes, there are good managers, but sadly they are the exception. {link to this story}

[Tue Jan 3 15:15:55 CST 2006]

The rumors about Google's secret plans continue unabated.

Google will unveil its own low-price personal computer or other device that connects to the Internet.

Sources say Google has been in negotiations with Wal-Mart Stores Inc., among other retailers, to sell a Google PC. The machine would run an operating system created by Google, not Microsoft's Windows, which is one reason it would be so cheap —perhaps as little as a couple of hundred dollars.

Bear Stearns analysts speculated in a research report last month that consumers would soon see something called "Google Cubes" —a small hardware box that could allow users to move songs, videos and other digital files between their computers and TV sets.

Larry Page, Google's co-founder and president of products, will give a keynote address Friday at the Consumer Electronics Show in Las Vegas. Analysts suspect that Page will use the opportunity either to show off a Google computing device or announce a partnership with a big retailer to sell such a machine.

And that's not the only Google theory out there. Content producers wonder whether Google's push into video search will unravel the economics that make Hollywood hum. If viewers can find and legally download an episode of Seinfeld through Google, will that cut into cable and network television's profits?

And what if Google, after equipping cities, starting with San Francisco, with Wi-Fi wireless technology, starts to offer pay-TV service for free?

The funny thing about Google is that it has everyone running around right now like a bunch of paranoid banshees, which I find quite interesting. I mean, after all, they have not confronted Microsoft or any of the other biggies so far. Yet many top executives appear to be going through plenty of sleepless nights just thinking about the mere possibility that Google may decide to get into their business. Ah, the power of the Web! {link to this story}

[Tue Jan 3 11:06:38 CST 2006]

I read that a British firm is about to release a Firefox P2P extension that has already become a sensation among bloggers out there. The idea sounds quite exciting. We'll see how it goes, although security might be an issue, as tends to be the case with this sort of application. What amazes me is that more such apps have not been released to the public, especially since Mozilla/Firefox was designed from the ground up to be not just a browser but also a development platform. There are quite a few extensions, to be fair, but most of them are quite simple (although useful) in nature. I do not know of any major app built on top of the Mozilla development framework, perhaps with the exception of the Komodo IDE, if I remember correctly. {link to this story}