Solène Rapenne on Qubes OS
[Thu Nov 21 11:29:29 CST 2024]

Yesterday, I shared one of Solène Rapenne's blog pieces on OpenBSD, and today it's another entry of hers, this time on Qubes OS. It's the best overview I've come across so far of what Qubes OS is and how it works. She also shares a pretty good list of pros and cons, together with a quick summary of her own workflow. While I understand that the approach taken by Qubes OS may sound like overkill, I find it interesting, and may actually give it a try someday. The idea of keeping different uses of the computer completely separated appeals to me. {link to this entry}

Reasons to stop running OpenBSD... at least on the desktop
[Wed Nov 20 13:58:48 CST 2024]

Today, I came across a blog post titled Why I stopped using OpenBSD written by Solène Rapenne, who has contributed to the OpenBSD project. So, it cannot be said that this is the opinion of a rabid anti-BSD fanatic. What's worse for those of us who like OpenBSD, a good part of her arguments are quite solid, at least when it comes to running the OS on a desktop computer. Among other things, she has issues with the lack of Bluetooth support, battery life and power usage, limited support for virtual machines, an outdated filesystem, and frequent crashes. Perhaps more interestingly, Rapenne reports that she decided to switch to Qubes OS, and likes a few things about Linux: namespaces, cgroups, swap compression, auditd, SELinux, and yes, even systemd and flatpak. Needless to say, those last two definitely came as a big surprise to many readers, including myself. Especially systemd. Mind you, I also agree there are things to like about it. But I don't like its overall size, the fact that it takes over systems and, above all, that Linux users increasingly have no choice when it comes to the init system they run. {link to this entry}

Why not Bluesky?
[Tue Nov 19 19:43:36 CST 2024]

Quite a few people I know have been talking about Bluesky lately. They are convinced it's the best alternative to X/Twitter now that Elon Musk appears to be ruining the experience for millions of users. I'm not a big social media type of guy. Yes, I did use Facebook in the second half of the 2000s, but there were some good reasons for that. I had recently moved to Spain with my family, and it was an easy way to keep in touch with my friends and relatives spread across two different Spanish cities and three different countries. By 2012 or so, I started slowing down quite a bit, and I barely use it these days. Messenger is still the only way to communicate with a few people, and I still use it for that purpose. But that's about it. Other than that, I tend to use RSS on my own self-hosted service to follow the news, manually visit a few websites and, from time to time (not every day), I also check Lemmy and Reddit (in that order, which means that quite often I simply don't check Reddit at all). Anyways, I have to admit that some of the things the Bluesky folks are putting together sound intriguing, straightforward, direct and honest. However, I'm old enough to have been burned a few times, and I just cannot trust a platform so dependent on a single corporate entity, one that, let's be honest, keeps me behind high walls and constantly threatens to put an end to my relationship with whoever I meet on it. As Tim Bray recently put it in a blog entry titled Why Not Bluesky, there are three main criteria that we can use to judge a social platform: technology, culture, and money. And, as he put it regarding this last element:

Here’s the thing. Whatever you think of capitalism, the evidence is overwhelming: Social networks with a single proprietor have trouble with long-term survival, and those that do survive have trouble with user-experience quality: see Enshittification.

Of course, the people who launched Bluesky have great intentions. And yes, they swear to God that they won't betray their users like other platforms did. But we've heard this before, right? In my case, I was burned in a similar way when Red Hat put an end to Red Hat Linux. For a few years, I switched to Ubuntu, until I realized I was still subject to the whims of a corporation. That's when I switched to Debian, and never looked back. Best thing I ever did. Well, something similar applies to this other topic, I think. As Cory Doctorow explains in a piece titled Bluesky and enshittification:

But here's the thing: all those other platforms, the ones where I unwisely allowed myself to get locked in, where today I find myself trapped by the professional, personal and political costs of leaving them, they were all started by people who swore they'd never sell out. I know those people, the old blogger mafia who started the CMSes, social media services, and publishing platforms where I find myself trapped. I considered them friends (I still consider most of them friends), and I knew them well enough to believe that they really cared about their users.

They did care about their users. They just cared about other stuff, too, and, when push came to shove, they chose the worsening of their services as the lesser of two evils.

Like: when your service is on the brink of being shut down by its investors, who demand that you compromise on privacy, or integrity, or quality, in some relatively small way, are you really going to stand on principle? What about all the users who won't be harmed by the compromise, but will have their communities and online lives shattered if you shut down the company? What about all the workers who trusted you, whose family finances will be critically imperilled if you don't compromise, just a little. What about the "ecosystem" partners who've bet on your service, building plug-ins, add-ons and services that make your product better? What about their employees and their employees' families?

Maybe you tell yourself, "If I do this, I'll live to fight another day. I can only make the service better for its users if the service still exists." Of course you tell yourself that.

I have watched virtually every service I relied on, gave my time and attention to, and trusted, go through this process. It happened with services run by people I knew well and thought highly of.

So, what is the solution then? To me, it's pretty clear. It was invented a long time ago. As Bray himself explains:

The evidence is also perfectly clear that it doesn’t have to be this way. The original social network, email, is now into its sixth decade of vigorous life. It ain’t perfect but it is essential, and not in any serious danger.

The single crucial difference between email and all those other networks — maybe the only significant difference — is that nobody owns or controls it. If you have a deployment that can speak the languages of IMAP and SMTP and the many anti-spam tools, you are de facto part of the global email social network.

The definitive essay on this question is Mike Masnick’s Protocols, Not Platforms: A Technological Approach to Free Speech. (Mike is now on Bluesky’s Board of Directors.)

No, of course I'm not proposing that everybody should use email instead of social media. What I'm saying is that only a social media platform that is as open as email can truly be trusted. A platform that allows any number of client implementations, and that also allows users to simply move their data elsewhere without being hindered by proprietary formats and standards that are not interoperable. This already exists, and it has been in use for decades. It's not a utopia. Not only that, but there are companies that make money offering services on top of it. Perhaps Bluesky will end up building that type of social media platform. But, at the moment, it's not there yet. When they finish building it (if they ever do), I may consider signing up with them. {link to this entry}

Most social media users don't read beyond the headline
[Tue Nov 19 18:57:55 CST 2024]

I'm afraid we all suspected this already, but there is now evidence that the vast majority of social media users don't read beyond the headline before sharing a link:

In an analysis of more than 35 million public posts containing links that were shared extensively on the social media platform between 2017 and 2020, the researchers found that around 75% of the shares were made without the posters clicking the link first. Of these, political content from both ends of the spectrum was shared without clicking more often than politically neutral content.

Oh, surprise! Right? Obviously, links to politically charged pages are the worst:

"A pattern emerged that was confirmed at the level of individual links," Snyder said. "The closer the political alignment of the content to the user—both liberal and conservative—the more it was shared without clicks. … They are simply forwarding things that seem on the surface to agree with their political ideology, not realizing that they may sometimes be sharing false information."

The researchers found that these links were shared over 41 million times, without being clicked. Of these, 76.94% came from conservative users and 14.25% from liberal users. The researchers explained that the vast majority—up to 82%—of the links to false information in the dataset originated from conservative news domains.

According to the article, the researchers even put forward a possible way to solve the problem, at least in part:

To cut down on sharing without clicking, Sundar said that social media platforms could introduce "friction" to slow the share, such as requiring people to acknowledge that they have read the full content prior to sharing.

I'm afraid a lot of friction would have to be introduced to make a difference. Not only that, but it truly goes against the best interest of the social platforms, since it would interfere with "the flow". {link to this entry}

What's lost in an MP3 sound file?
[Tue Nov 19 11:54:24 CST 2024]

Now, this is interesting. Vox has a short article showing what is lost to compression when creating an MP3 audio file. Since I'm not a purist (nor a hipster, if I may say so), it's not as if I care a lot about the loss. But it's interesting nonetheless, and it's something you can roughly reproduce at home, as sketched below.
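
This isn't the method the Vox piece uses, just a quick approximation with ffmpeg and sox (both packaged in Debian): encode a track to MP3, decode it back to WAV, and subtract the result from the original. The filenames are placeholders, and MP3 encoding adds a small delay and padding, so the two files won't line up sample for sample; the difference track only gives a rough impression of what the codec throws away.

	$ ffmpeg -i original.wav -codec:a libmp3lame -b:a 128k compressed.mp3
	$ ffmpeg -i compressed.mp3 decoded.wav
	$ sox -m -v 1 original.wav -v -1 decoded.wav difference.wav

{link to this entry}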

browsh: a modern text-based browser
[Mon Nov 18 14:33:13 CST 2024]

Today, I came across browsh, a modern text-based browser that can render anything a modern graphical browser can. This is a key difference compared to other options, such as lynx, links or w3m. I haven't given it a try yet, but it looks promising. On top of that, they have Debian binaries available on their website, so installing it should go roughly as sketched below.
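
Since I haven't actually run this yet, treat it as a sketch rather than a tested recipe. The .deb filename is a placeholder for whatever version you download from their site, and browsh needs Firefox installed, since it drives a headless Firefox under the hood:

	$ sudo apt install firefox-esr
	$ sudo apt install ./browsh_<version>_linux_amd64.deb
	$ browsh

{link to this entry}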

Large Language Models (LLMs) may not be fit for real-world use
[Sat Nov 16 14:10:05 CST 2024]

Amid so much AI hype, it's not easy to remain even moderately skeptical. Mind you, I'm not speaking about opposing the technology itself, much less spreading conspiracy theories about it. But sometimes I get the feeling that all the hype over AI is making it increasingly difficult to stand for common sense, reason and moderation. So, it came as a bit of a surprise to read about a recent research piece arguing that LLMs may not be fit for real-world use:

Generative artificial intelligence (AI) systems may be able to produce some eye-opening results but new research shows they don’t have a coherent understanding of the world and real rules.

In a new study published to the arXiv preprint database, scientists with MIT, Harvard and Cornell found that the large language models (LLMs), like GPT-4 or Anthropic's Claude 3 Opus, fail to produce underlying models that accurately represent the real world.

When tasked with providing turn-by-turn driving directions in New York City, for example, LLMs delivered them with near-100% accuracy. But the underlying maps used were full of non-existent streets and routes when the scientists extracted them.

The researchers found that when unexpected changes were added to a directive (such as detours and closed streets), the accuracy of directions the LLMs gave plummeted. In some cases, it resulted in total failure. As such, it raises concerns that AI systems deployed in a real-world situation, say in a driverless car, could malfunction when presented with dynamic environments or tasks.

Or, as the lead author of the research piece, Keyon Vafa, warns:

"I was surprised by how quickly the performance deteriorated as soon as we added a detour. If we close just 1 percent of the possible streets, accuracy immediately plummets from nearly 100 percent to just 67 percent," added Vafa.

This shows that different approaches to the use of LLMs are needed to produce accurate world models, the researchers said. What these approaches could be isn't clear, but it does highlight the fragility of transformer LLMs when faced with dynamic environments.

In other words, it's just a tool, folks. It's neither a miracle nor the devil. Just a tool. Let's calm down a bit. {link to this entry}

Steve Jobs, NeXTSTEP, and early object-oriented programming
[Thu Nov 14 14:00:13 CST 2024]

The folks at the Computer History Museum published a fascinating piece on the connections between Steve Jobs, NeXTSTEP, and object-oriented programming. After an overall introduction to the topic where they explain the beginnings of the modern GUI and its connections to Xerox PARC, Steve Jobs, Apple and NeXT, they go on to talk about the birth of object-oriented programming, Smalltalk and Objective-C:

Objective-C was created in the 1980s by Brad Cox to add Smalltalk-style object-orientation to traditional, procedure-oriented C programs.1 It had a few significant advantages over Smalltalk. Programs written in Smalltalk could not stand alone. To run, Smalltalk programs had to be installed along with an entire Smalltalk runtime environment—a virtual machine, much like Java programs today. This meant that Smalltalk was very resource intensive, using significantly more memory, and running often slower, than comparable C programs that could run on their own. Also like Java, Smalltalk programs had their own user interface conventions, looking and feeling different than other applications on the native environment on which they were run. (Udell, 1990) By re-implementing Smalltalk’s ideas in C, Cox made it possible for Objective-C programmers to organize their program’s architecture using Smalltalk’s higher level abstractions while fine-tuning performance-critical code in procedural C, which meant that Objective-C programs could run just as fast as traditional C programs. Moreover, because they did not need to be installed alongside a Smalltalk virtual machine, their memory footprint was comparable to that of C programs, and, being fully native to the platform, would look and feel the same as all other applications on the system. (Cox, 1991) A further benefit was that Objective-C programs, being fully compatible with C, could utilize the hundreds of C libraries that had already been written for Unix and other platforms. This was particularly advantageous to NeXT, because NeXTSTEP, being based on Unix, could get a leg up on programs that could run on it. Developers could simply “wrap” an existing C code base with a new object-oriented GUI and have a fully functional application. Objective-C’s hybrid nature allowed NeXT programmers to have the best of both the Smalltalk and C worlds.

What value would this combination have for software developers? As early as the 1960s, computer professionals had been complaining of a “software crisis.” A widely distributed graph predicted that the costs of programming would eclipse the costs of hardware as software became ever more complex. (Slayton, 2013, pp. 155–157) Famously, IBM’s OS/360 project had shipped late, over-budget, and was horribly buggy. (Ensmenger, 2010, pp. 45–47, 205–206; Slayton, 2013, pp. 112–116) IBM produced a report claiming that the best programmers were anywhere up to twenty-five times more productive than the average programmer. (Ensmenger, 2010, p. 19) Programmers, frequently optimizing machine code with clever tricks to save memory or time, were said to be practitioners of a “black art” (Ensmenger, 2010, p. 40) and thus impossible to manage. Concern was so great that in 1968 NATO convened a conference of computer scientists at Garmisch, Switzerland to see if software programming could be turned into a discipline more like engineering. In the wake of the OS/360 debacle, CHM Fellow Fred Brooks, the IBM manager in charge of OS/360, wrote the seminal text in software engineering, The Mythical Man-Month. In it, Brooks famously outlined what became known as Brooks’ law—that after a software team reaches a certain size (and thus as the complexity of the software increases), adding more programmers will actually increase the cost and delay its release. Software, Brooks claimed, is best developed in small “surgical” teams led by a chief programmer, who is responsible for all architectural decisions, while subordinates do the implementation. (Brooks, 1995)

By the 1980s, the problems of cost and complexity in software remained unsolved. It appeared that the software industry might be in a perpetual state of crisis. In 1986, Brooks revisited his thesis and claimed that, despite modest gains from improved programming languages, there was no single technology, no “silver bullet” that could, by itself, increase programmer productivity by an order of magnitude—the 10x improvement that would elevate average programmers to the level of exceptional ones. (Brooks, 1987) Brad Cox begged to differ. Cox argued that object-oriented programming could be used to create libraries of software objects that developers could then buy off-the-shelf and then easily combine, like a Lego set, to create programs in a fraction of the time. Just as interchangeable parts had led to the original Industrial Revolution, a market for reusable, off-the-shelf software objects would lead to a Software Industrial Revolution. (Cox, 1990a, 1990b) Cox’s Objective-C language was a test bed for just such a vision, and Cox started a company, Stepstone, to sell libraries of objects to other developers.

But how exactly would the object-oriented approach help?

In the vision of both Cox and Jobs, the grunt work of making an application was offloaded to the developers of the objects; nobody in a small team needed to be a mere “implementer,” forced to work on the program’s “foundation.” Unlike procedural code units, it was precisely the black-boxed, encapsulated nature of objects—which prevented other programmers from tampering with their code—that enforced the modularity that allowed them to be reused interchangeably. Developers standing on any given floor simply were not allowed to mess with the foundation they stood on. Freed from worrying about the internal details of objects, developers could focus on the more creatively rewarding work of design and architecture that had been the purview of the “chief” programmer in Brook’s scheme. All team members would start at the twentieth floor and collaborate with each other as equals, and their efforts would continue to build upward, rather than be diverted to redoing the floors upon which they stood.

Alas, as so often happens, no matter how prescient their vision might have been, reality has a way of messing with our plans:

Looking back from the perspective of 2016, Steve Jobs was remarkably prescient. After Mac OS X shipped on Macintosh personal computers, small-scale former NeXT developers and shareware Mac developers alike began to write apps using AppKit and Interface Builder, now called “Cocoa.” These developers, taking advantage of eCommerce over the Web, began to call themselves independent or ”indie” software developers, as opposed to the large corporate concerns like Microsoft and Adobe, with their hundred-man teams. In 2008, Apple opened up the iPhone to third-party software developers and created the App Store, enabling developers to sell and distribute their apps directly to consumers on their mobile devices, without having to set up their own servers or payment systems. The App Store became ground zero for a new gold rush in software development, inviting legendary venture capitalist firm Kleiner Perkins Caulfield & Byers to set up an “iFund” to fund mobile app startups. (Wortham, 2009) At the same time, indie Mac developers like Andrew Stone and Wil Shipley predicted that Cocoa Touch and the App Store would revolutionize the software industry around millions of small-scale developers.

Unfortunately, in the years since 2008, this utopian dream has slowly died: as unicorns, acquisitions, and big corporations moved in, the mobile market has matured, squeezing out the little guys who refuse investor funding. With hundreds of competitors in the App Store, it can be extremely difficult to get one’s app noticed without expensive external marketing. The reality is that a majority of mobile developers cannot sustain a living by making apps, and most profitable developers are contractors writing apps for large corporations. Nevertheless, the object-oriented technology Jobs demoed in 1997 is today the basis for every iPhone, iPad, Apple Watch and Apple TV app. Did Steve Jobs predict the future? Alan Kay famously said, “The best way to predict the future is to invent it.” Cyberpunk author William Gibson noted, “The future is already here—it’s just not evenly distributed.” NeXT had already invented the future back in 1988, but because NeXT never shipped more than 50,000 computers, only a handful were lucky enough to glimpse it in the 1990s. Steve Jobs needed to return to Apple to distribute that future to the rest of the world.

Anyways, the article makes for an interesting read for anyone who cares about the history of computing. {link to this entry}

Installing GNOME on Debian after minimal install
[Sun Nov 10 14:58:44 CST 2024]

After performing a minimal install of Debian on an old Lenovo Thinkpad laptop I have lying around, I decided to install the GNOME desktop environment. The process is pretty straightforward:

	# apt update
	# apt install task-gnome-desktop
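
That pulls in the full GNOME task, including the gdm3 display manager, which should come up by itself on the next boot. Just in case it doesn't (a hedged aside on my part, not something I actually ran into), the first command below checks whether the display manager service is running, and the second enables it at boot and starts it right away:

	# systemctl status gdm3
	# systemctl enable --now gdm3
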
{link to this entry}

Why systemd is a problem for embedded Linux (and beyond?)
[Tue Nov 5 11:38:07 CST 2024]

Much has been said about systemd. As in the case of the infamous editor war, the debate will rage on for years and years without ever reaching a clear conclusion, I'm afraid. Like everything else in life, systemd has pros and cons. For desktop computing, it may not be such a bad thing after all. In any case, Kevin Boone recently wrote an article on why systemd is a problem for embedded Linux that makes a lot of sense to me. After showing that memory use is definitely higher than with other alternatives (and, truly, memory use does matter in the case of embedded devices; there's a quick way to eyeball this on your own machine at the end of this entry), he acknowledges that, no matter what, it offers an attractive solution for Linux distributions. It works reasonably well, and it comes as a single tool. It also speeds up the boot process, handles process initialization on demand, and does it in parallel. Those are all advantages. But then again, none of that applies to embedded devices, at least for the most part. Yet, more and more software is written in such a way that it can only work with systemd. It has quickly become the new standard in the Linux world, like it or not. His conclusions are key:

I’ve found that many systemd components are less effective in an embedded environment than the traditional alternatives. I’ve shown some illustrative examples in this article, but I really don’t think there’s much controversy here: this simply isn’t the environment that systemd was designed for. But it’s getting increasingly difficult to find a mainstream Linux distribution that doesn’t use systemd – even Raspberry Pi distributions use it. As systemd absorbs more functionality into itself, there’s going to be little motivation to maintain alternatives. After all, if everybody uses systemd, what motivation is there to support anything else? My concern is that we’re moving towards a future where Linux is inconceivable without systemd. That will be a problem for those environments where systemd really doesn’t shine.

I wish I knew the solution to this problem. There’s no point complaining about systemd, because distribution maintainers have grown to like it too much. And, in any event, well-reasoned, technical concerns are drowned out by all the ranting and conspiracy theories. All we can do – if we care – is to continue to use and support Linux distributions that don’t insist on systemd, and stand ready to develop or maintain alternatives to those bits of Linux that it absorbs.

I must say I agree with Boone. Unlike other changes introduced in the past, it looks as if systemd removes personal choice and limits user freedom. That may be a first in the Linux and free software world. Time to reconsider the BSDs?
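
As for the memory question, this obviously isn't Boone's methodology, but a crude way to see what your own init process is using is to ask ps for the resident set size (in kilobytes) of PID 1:

	$ ps -o rss=,comm= -p 1

Run it on a systemd machine and on a box with a lighter init, and the gap is usually plain to see.

{link to this entry}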

Netscape's legacy
[Mon Nov 4 07:03:51 CST 2024]

ZDNet recently published an article on how Netscape still lives on 30 years later. I must say I keep a special place in my heart for the old Netscape Navigator browser. When I arrived in the US back in the mid-1990s, I quickly started working for a company called DecisionOne (well, prior to that, it was Bell Atlantic Business Systems Services). We provided technical support for ISPs back when they were booming, when the Internet became a big thing. Then, DecisionOne got a contract to support Netscape products, and I joined that team. It was, truly, the beginning of my involvement in the field where I still work. And I loved it. I loved being a part of that digital transformation that was taking place before our eyes. From what I remember, DecisionOne's management didn't have to put much effort into getting us all very excited, not so much because of the company itself as due to the product that we supported. We felt part of a larger thing. An up-and-coming thing. An unstoppable wave that was way cool and was about to turn the tables, so to speak. It was exciting. Without a doubt. In any case, as the article explains, Netscape gave us lots of things. The author mentions only some. As he says, it gave us JavaScript, Secure Sockets Layer (SSL) (the predecessor of today's Transport Layer Security, or TLS), and cookies. And, of course, it also gave us Mozilla Firefox. As the author explains:

When Netscape saw the writing on the wall as its market share slumped, the company tried a desperate move: it open-sourced Navigator's code. Today's companies open-source their programs every day. Then, almost no one opened up their proprietary code. So, in 1998, Netscape became one of the first companies to release its source code. That code would eventually become Firefox.

The initial Mozilla codebase came directly from Netscape's browser code. While much of this code was eventually rewritten, it provided the starting point for Mozilla's development efforts.

That was also an exciting moment. At the time, I was still at DecisionOne, working the third shift. By then, I was no longer involved with the Netscape account, and had moved on to other accounts. But I remember checking the Mozilla website every few minutes the night they released their source code, so I could download it and try compiling it. As the article explains, it's quite common to see companies releasing their source code these days. But things were quite different back then. What Netscape did, releasing the source code to their crown jewel, was unheard of at the time. Sure, it was a move born out of desperation. But it changed things.

Finally, as the author explains, their overall attitude was part of their appeal to us:

Netscape's innovative spirit and confrontational stance toward Microsoft embodied the ambitious ethos of many dot-com startups. This approach was about disrupting existing technologies and business models.

That also became a clear element in the identity of Silicon Valley. The dot-coms were seen as disruptors, as transformational forces. No wonder, then, that young people like us (at the time, of course) felt so excited about it all. It wasn't only the new up-and-coming thing. It was a revolution of sorts. {link to this entry}