When to use 'git checkout' vs. 'git switch'
[Thu Oct 31 13:49:11 CDT 2024]

All this time I've been using git checkout when switching to another branch. But, apparently, it is now possible to use git switch. In Understanding the Differences Between 'git checkout' and 'git switch', Greg Foster writes:

With the introduction of git switch in Git version 2.23, there has been some confusion about when to use each command. Historically, git checkout has been used for two main purposes: switching between branches and restoring working tree files. However, this dual functionality can lead to confusion and mistakes, especially for newcomers. To address this, Git introduced the git switch and git restore commands to separate these responsibilities.

I guess I should use git switch from now on; I've never used git checkout to restore files to their original versions anyway. The author provides more details.
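
For my own reference, here is roughly how the old and new commands line up (branch and file names below are just placeholders):

	git checkout feature-branch     ->   git switch feature-branch
	git checkout -b new-branch      ->   git switch -c new-branch
	git checkout -- path/to/file    ->   git restore path/to/file

{link to this entry}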

Advice on git commit messages
[Thu Oct 31 09:40:47 CDT 2024]

Came across a blog piece with good advice on how to format git commit messages. Here is the gist of it:

	Capitalized, short (50 chars or less) summary

	More detailed explanatory text, if necessary.  Wrap it to about 72
	characters or so.  In some contexts, the first line is treated as the
	subject of an email and the rest of the text as the body.  The blank
	line separating the summary from the body is critical (unless you omit
	the body entirely); tools like rebase can get confused if you run the
	two together.

	Write your commit message in the imperative: "Fix bug" and not "Fixed bug"
	or "Fixes bug."  This convention matches up with commit messages generated
	by commands like git merge and git revert.

	Further paragraphs come after blank lines.

	- Bullet points are okay, too

	- Typically a hyphen or asterisk is used for the bullet, followed by a
	  single space, with blank lines in between, but conventions vary here

	- Use a hanging indent
	
For the most part, that's the model I've always followed. Not always, I must acknowledge, but it looks like good advice to me. Oh, and, as a bonus, there's another blog entry, this one by Thorsten Ball, on how he uses git. That one is also full of good advice. I keep them both on my list of favorites.
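
Incidentally, if you want Git itself to nudge you toward that format, one option is to keep the template in a file and point Git at it (the ~/.gitmessage path here is just an arbitrary choice):

	git config --global commit.template ~/.gitmessage

{link to this entry}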

The "Zero Click Internet"? What the...?
[Wed Oct 30 18:46:38 CDT 2024]

Josh Tyler writes on TechSpot about the "Zero Click Internet". Say, the what?

The internet is in the midst of undergoing the biggest change since its inception. It's huge. And there is no going back.

The web is changing into the Zero Click Internet, and it will change everything about how you do everything. Zero Click Internet means you'll no longer click on links to find the content you want. In fact, much of the internet already works this way.

Platforms like Facebook and Twitter stopped promoting posts with links a few years ago. This forced users on those platforms to post content directly there instead of directing followers to external sites. And now with generative AI in the mix, platforms are even more incentivized to surface content directly, whether it's pulled from their databases or created by AI.

This phenomenon isn't entirely new; it began when Google started answering simple queries directly on its search results page. But it's escalated significantly with the rise of AI chatbots and advanced recommendation algorithms.

Google is now aggressively scraping content from websites and displaying it directly in search results. What few search results remain are buried so far down the page that almost no one sees or clicks on them anymore. And Google's plan is to bury them even further in coming months. And that's what a Zero Click Internet means. It means an end to users visiting websites, entirely.

Hmm, I'm not sure I like any of this, "cool" and modern as it all sounds. It basically means that we, Internet users, will relinquish all the power to Big Tech without much control over our data. Yes, it's all very automated. Yes, it's all very futuristic. Heck, it may even be very convenient. But what do you want to bet that none of these vendors will make it easy to migrate our data onto a competing platform or tool? So, that means we will be locked into whichever walled garden we use. It's social media on steroids. It's taking the old Internet, which originally appeared as a liberating force, let's not forget that, completely into the corporate realm. Definitely the opposite of the IndieWeb, which is an idea I like much better. After all, I've been putting together this personal website and writing notes on it for almost a quarter century now. And guess what? It may be inconvenient. It is quite old-fashioned and, as could be expected from something that grew pretty much organically, it feels patchy. I understand that. But it's mine, and I can do pretty much whatever I want with it. I own the data. Not some large corporation somewhere. {link to this entry}

On why we get buggy software
[Wed Oct 30 11:46:05 CDT 2024]

Nick Hodges writes on InfoWorld a piece on why we get buggy software that is worth reading, I think.

The manufacturing of physical objects is largely understood, mostly because the same thing is being produced over and over. If you are cranking out widgets, the process is orderly, repeatable, and possibly even perfectable. You can find and improve the worst bottleneck again and again until the whole process is highly efficient.

Building bridges and houses is less repeatable, but in both cases the process has been developed over many decades, even centuries. There are few surprises.

But building software? That is a whole different cup of noodles. Each project, large and small, is unique and has never been done before. There are any number of ways to solve a given problem and smart people disagree on what is the best way. Maybe there isn’t even a single best way. Often, a better way only reveals itself after you’ve done the project the first time. Once you choose a path, there will always be an unknown number of unknown unknowns. (See what I did there?)

Sure, we have developed some best practices over the years and some patterns have emerged, but developers will even differ over how those patterns might be implemented.

In other words, there is generally a pretty clear best way to build widgets, but there are a seemingly infinite number of ways to build a software project, making the notion of a “best way” problematic.

Or, as so many people have put it before: programming is an art. Not engineering, but an art.

Hodges then explains that the process of writing software truly is a "two-legged three-legged stool":

So what is a development team to do? There are dials that can be turned. The first dial everyone turns is to work more hours. That might work. However, pressuring a team usually results in more shortcuts and more errors, increasing technical debt but not necessarily producing an on-time delivery. Managers are tempted to add more people to increase the hours worked, but then Brooks Law kicks in and things get worse.

Altering the delivered feature set is another common solution. Another path is sacrificing quality, whether via reducing the bug fixing or decreasing the effectiveness of the user interface. Finally, one can move the schedule, but that irritates paying customers.

Hence we have the famous three-legged stool of “quality, schedule, or features — pick any two!” The business hates to sacrifice schedule, and sometimes will flat refuse to do it. Customers, whose feelings are dreadfully important, don’t like shifting dates and generally abhor scaling back features.

That leaves quality on the chopping block, and let’s be honest, this is usually the path taken. No one likes a glitchy application, but it is both easy and tempting to hide poor quality behind apparently working features. It is all too routine for customers to take delivery of “working software” and only discover the lack of quality after the fact. The software team then fixes the bugs and delivers a “point release” that improves quality. This is, in effect, a very sly way to alter the schedule — put the bug-fixing time after the delivery time.

And that “clever” way of solving the problem is normally the one chosen. The development team gets to (supposedly) hit the date, the features promised to the customer are delivered, and everyone is happy. Until they run the software.

Indeed. We may dislike that state of things. But I'm convinced Hodges is describing reality pretty well. In other words, we don't end up with buggy software because someone is doing something wrong. We don't end up with buggy software because the programmers are bad or are using the wrong methodology. We end up there pretty much by design, I'm afraid. {link to this entry}

On the true cost of the cloud... and why there are no silver bullets
[Mon Oct 21 11:56:46 CDT 2024]

Today, we read on The Register that 37signals managed to save $2 million by going partially cloud-free in 2024:

As it took until the end of the year for various contract commitments to expire, 2024 has been the first clean year of savings, and according to Hansson, "we've been pleasantly surprised that they've been even better than originally estimated."

In fact, the cloud bill for 37signals now stands at about $1.3 million, a reduction of almost $2 million per year, and the savings are likely to be more than the company's original estimate of $7 million over five years as it managed to fit the new hardware into its existing datacenter racks and power restrictions.

(...)

"We've been out for just over a year now, and the team managing everything is still the same. There were no hidden dragons of additional workloads associated with the exit that required us to balloon the team, as some spectators speculated when we announced it."

I suppose I'm old enough to know that there are no silver bullets. None. In any field. Cloud computing is a tool. Just that. And, as any tool, it may or may not make sense to use depending on one's goals and context. Also, depending on the circumstances, it may make sense to adopt it only partially. Incidentally, I don't expect AI to be any different. Don't fall for the hype. In life, things are rarely all black or white. {link to this entry}

The US and Europe took divergent approaches to Theoretical Computer Science?
[Sat Oct 19 12:17:45 CDT 2024]

Here is another interesting tidbit of information I recently came across. Apparently, the US and Europe may have taken divergent approaches to the field of Theoretical Computer Science (TCS) starting in the 1980s, according to Moshe Y. Vardi:

Wikipedia defines Theoretical Computer Science (TCS) as the "division or subset of general computer science and mathematics that focuses on more abstract or mathematical aspects of computing." This description of TCS seems to be rather straightforward, and it is not clear why there should be geographical variations in its interpretation. Yet in 1992, when Yuri Gurevich had the opportunity to spend a few months visiting a number of European centers, he wrote in his report, titled "Logic Activities in Europe," that "It is amazing, however, how different computer science is, especially theoretical computer science, in Europe and the U.S." (Gurevich was preceded by E.W. Dijkstra, who wrote an EWD Note 611 "On the fact that the Atlantic Ocean has two sides.")

This difference between TCS in the U.S. (more generally, North America) and Europe is often described by insiders as "Volume A" vs. "Volume B," referring to the Handbook of Theoretical Computer Science, published in 1990, with Jan van Leeuwen as editor. The handbook consisted of Volume A, focusing on algorithms and complexity, and Volume B, focusing on formal models and semantics. In other words, Volume A is the theory of algorithms, while Volume B is the theory of software. North American TCS tends to be quite heavily Volume A, while European TCS tends to encompass both Volume A and Volume B. Gurevich’s report was focused on activities of the Volume-B type, which is sometimes referred to as "Eurotheory."

Apparently, according to the author, this difference didn't exist before the 1980s. He also adds: "It is astonishing to realize the term 'Eurotheory' is used somewhat derogatorily, implying a narrow and esoteric focus for European TCS." If I had to venture a guess, I'd say it may be due to the strong opposition in certain sectors of American campuses (those where the "hard sciences" are taught) to the trendy postmodern approach that took hold across the Atlantic, and even in the humanities faculties here in the US. In that sense, it may be another example of the proverbial throwing out of the baby with the bathwater. {link to this entry}

Funding open source projects
[Sat Oct 19 12:08:08 CDT 2024]

InfoWorld published a few days ago an opinion piece signed by Bill Doerrfeld titled How do we fund open source?. The author lays out a few options, all of them with their own pros and cons: direct monetization, corporate support, code contributions, intermediary companies, open source foundations, and public aid. The piece was quite timely because, the day before I read it, I happened to mention to my oldest son how, in a very accidental manner, I came to realize that I do contribute to a couple of open source projects in an indirect way. As it turned out, both Joplin and Logseq offer a synchronization server for a very small monthly fee, which I use. As far as I know, both projects have also released their server-side code as open source, thus making it possible for anyone to self-host. However, it's more convenient to pay the small fee to them and, along the way, provide a modest contribution to the project. {link to this entry}

On passwords and passkeys
[Sat Oct 19 11:57:59 CDT 2024]

About ten days ago, David Heinemeier Hansson, creator of Basecamp, wrote a blog entry explaining how passwords have problems but, in his view, passkeys have more:

The problem with passkeys is that they're essentially a halfway house to a password manager, but tied to a specific platform in ways that aren't obvious to a user at all, and liable to easily leave them unable to access of their accounts. Much the same way that two-factor authentication can do, but worse, since you're not even aware of it.

Let's take a simple example. You have an iPhone and a Windows computer. Chrome on Windows stores your passkeys in Windows Hello, so if you sign up for a service on Windows, and you then want to access it on iPhone, you're going to be stuck (unless you're so forward thinking as to add a second passkey, somehow, from the iPhone while on the Windows computer!). The passkey lives on the wrong device, if you're away from the computer and want to login, and it's not at all obvious to most users how they might fix that.

Even in the best case scenario, where you're using an iPhone and a Mac that are synced with Keychain Access via iCloud, you're still going to be stuck, if you need to access a service on a friend's computer in a pinch. Or if you're not using Keychain Access at all. There are plenty of pitfalls all over the flow. And the solutions, like scanning a QR code with a separate device, are cumbersome and alien to most users.

If you're going to teach someone how to deal with all of this, and all the potential pitfalls that might lock them out of your service, you almost might as well teach them how to use a cross-platform password manager like 1password.

Perhaps the idea is good, but the implementation not so much? I get the feeling that, over and over again, good ideas are implemented in a self-centered manner, so to speak. What I mean is that, although the idea might be great if implemented as an open standard following the trail of the old Internet standards, it is often applied in a self-centered and self-referential context where different vendors act as if their own particular technical solutions were the only things that exist. I assume they do this, at least in part, out of business interest. But I don't rule out that they may do so also because it's too easy to become too wrapped up in one's own world and lose sight of everything else. One way or another, it seems clear that we are better off using platform-independent tools. That's precisely what I advise my friends and relatives to do. However, it's often easier to stick to whichever tools the vendor who makes their smartphone, tablet or netbook provides, without thinking twice about it. It's just more convenient. {link to this entry}

Mix of links on AI: Apple, AI "reasoning" capabilities, and open source
[Sat Oct 19 11:38:04 CDT 2024]

A diverse mix of news articles on AI. Ars Technica tells us that an Apple study exposes deep cracks in LLMs' "reasoning" capabilities:

For a while now, companies like OpenAI and Google have been touting advanced "reasoning" capabilities as the next big step in their latest artificial intelligence models. Now, though, a new study from six Apple engineers shows that the mathematical "reasoning" displayed by advanced large language models can be extremely brittle and unreliable in the face of seemingly trivial changes to common benchmark problems.

The fragility highlighted in these new results helps support previous research suggesting that LLMs' use of probabilistic pattern matching is missing the formal understanding of underlying concepts needed for truly reliable mathematical reasoning capabilities. "Current LLMs are not capable of genuine logical reasoning," the researchers hypothesize based on these results. "Instead, they attempt to replicate the reasoning steps observed in their training data."

Dare I say this is what most people who were educated in the Humanities, such as myself, have been saying for a while now? Dare I also say that, obviously, this goes against all the hype surrounding this new technology that we are exposed to on a daily basis? Mind you, AI should still be quite useful for certain purposes. As a tool for certain things. But it's not the solution to all our problems, as some people appear to believe. Yes, it's an exciting technology, and it immediately brings to mind pictures taken from science-fiction novels and movies that are indeed very futuristic. The excitement is nice. But let's make an effort to counter that with some level of realism.

Elsewhere, we also read that the Open Source Initiative (OSI) calls out Meta for its misleading "open source" AI models:

You see, even though Meta advertises Llama as an open source AI model, they only provide the weights for it—the things that help models learn patterns and make accurate predictions.

As for the other aspects, like the dataset, the code, and the training process, they are kept under wraps. Many in the AI community have started calling such models 'open weight' instead of open source, as it more accurately reflects the level of openness.

Plus, the license Llama is provided under does not adhere to the open source definition set out by the OSI, as it restricts the software's use to a great extent.

It seems clear that Meta constantly referring to their LLM as "open source" is only a tactic to try and gain ground on their main competitors, since they are definitely lagging behind.

Apropos the topic of AI and open source, InfoWorld published an opinion piece signed by Matt Asay defending the view that "open source isn't going to save AI":

As Richard Waters notes in the Financial Times, “OpenAI’s biggest challenge [is] the lack of deep moats around its business and the intense competition it faces.” That competition isn’t coming from open source. It’s coming from other well-capitalized businesses—from Microsoft, Meta, and Google. One of the biggest issues in AI right now is how much heavy lifting is imposed on the user. Users don’t want or need a bunch of new, open source–enabled options. Rather, they need someone to make AI simpler. Who will deliver that simplicity is still up for discussion, but the answer isn’t going to be “lots of open source vendors,” because, by definition, that would simply exacerbate the complexity that customers want removed.

Yes, we should be grateful for open source and its impact on AI, just as we should for its impact on cloud and other technology advances. But open source isn’t going to democratize AI any more than it has any other market. The big thing that customers ultimately care about, and are willing to pay for, is convenience and simplicity. I still believe what I wrote back in 2009: “No one cares about Google because it’s running PHP or Java or whatever. No one cares about the underlying software at all; at least, its users don’t.”

Overall, I'd say Asay is correct. Some of us do care whether the software we use is open source, but we shouldn't fool ourselves into thinking that significant chunks of the population out there also care. Whether I like it or not, I'm afraid that's not the case. That said, as someone who is more technically skilled than the average person, I do prefer to have the choice to run open source software. I do like to have that option, at least. Thus, if at all possible, I'd also like to see it in the field of AI. {link to this entry}

Chromium's influence on browser alternatives
[Sat Oct 19 11:27:50 CDT 2024]

Came across a blog entry from Rohan "Seirdy" Kumar on Chromium's influence on Chromium alternatives that brings up an interesting point:

I don’t think most people realize how Firefox and Safari depend on Google for more than “just” revenue from default search engine deals and prototyping new web platform features.

Off the top of my head, Safari and Firefox use the following Chromium libraries: libwebrtc, libbrotli, libvpx, libwebp, some color management libraries, libjxl (Chromium may eventually contribute a Rust JPEG-XL implementation to Firefox; it’s a hard image format to implement!), much of Safari’s cryptography (from BoringSSL), Firefox’s 2D renderer (Skia)...the list goes on. Much of Firefox’s security overhaul in recent years (process isolation, site isolation, user namespace sandboxes, effort on building with ControlFlowIntegrity) is directly inspired by Chromium’s architecture.

Although I prefer to run Firefox precisely because I think it's important to keep browser alternatives alive, this is a very good point. Like it or not, we are not islands unto ourselves. In particular, the level of interdependence these days is beyond imagination. As the author points out at the end of his blog entry:

For completeness: Firefox and Safari’s influence on Chromium in recent years includes the addition of memory-safe languages, partitioned site storage, declarative content blocking (from Safari), and a vague multi-year repeatedly-delayed intent to phase out third-party cookies. Chromium would be no better off without other browser projects.

{link to this entry}

Preventing search engines from linking to your Facebook profile
[Sat Oct 19 11:14:27 CDT 2024]

Although it's been years since I've actively used Facebook, my account is still activated and, from time to time, I do check it. Rarely. But I do check it. Like it or not, it's my only window to certain friends, both in the US and Spain. Also, it's the only way certain friends know how to contact me for a quick message. So, in spite of my own personal preferences, the account is still there. Recently, it occurred to me that I might as well search for myself online, and see how much exposure to the world I have. Basically, it looks as if there are quite a few other people who share my first name and two last names. That helps confuse matters a bit and provides some level of privacy (well, to some extent, at least for people who are casually searching around, which, in the case of someone like me who is not a public personality, might be enough). There are two exceptions to that: first, my Facebook account; and, second, an old Twitter account I opened while I was a representative in my local council back when I lived in Spain. I deactivated the Twitter account some time ago, so there is nothing there to see. The Facebook account is a different story. So, I searched for a way to avoid this, and came across an article published by Facebook that explains what to do if you don't want search engines to link to your Facebook profile. Alas, it wasn't much help in my case because, although the instructions are correct, I had already enabled this privacy feature. Notice, though, that the privacy protection is limited only to your personal profile on Facebook. As they clearly state, "information from your profile and some things you share (such as public information) can still appear in search engine results even if you select No". In other words, this particular setting is not a great help. Facebook has other pages where they basically explain the same thing. As we often hear, once we open this particular Pandora's box, there is no way to close it again, it seems. The problem is that, unlike blogging and other forms of online presence, Facebook (and other social media) makes it way easier to share very personal tidbits, thus also making it easier for strangers to go around collecting those "crumbs" to build a whole, meaningful picture. {link to this entry}

Can you trust cryptocurrencies?
[Sun Oct 13 10:41:35 CDT 2024]

I must say that from the very beginning, I've been very skeptical about the cryptocurrency fad. In general, while I can understand that the Government cannot always be trusted, I believe private corporations can be trusted even less. Let's face it, they are guided by their own economic interests. This is even officially so. There isn't even a pretense of supporting the public interest. There is no democratic control either. And then, of course, there is the libertarian utopia that I never believed in, especially when it comes to its most individualistic variant, which is the one that predominates here in the USA. In any case, all this is apropos a news article published by The Register on how the FBI created a cryptocurrency with the specific purpose of catching scammers and fraudsters. It's the Wild West in cryptoland. It's what happens when you eliminate any overseeing authority and do away with most rules in the name of individual interest. I don't find it surprising, to be honest. Is it OK to use one of the most well-known cryptocurrencies here or there? I suppose. Personally, I'm not interested. But I suppose. One way or another, though, they are not here to set us free and build utopia on earth. {link to this entry}

A bit of UNIX history apropos macOS
[Sun Oct 13 10:20:48 CDT 2024]

While reading an article from The Register on how Apple macOS 15 Sequoia is officially UNIX, I came across a couple of little tidbits of UNIX history. First of all, the kernel in modern macOS is the XNU kernel, which, ironically enough, stands for "X is Not Unix". Originally developed by NeXT for their NeXTSTEP operating system, it is a hybrid kernel based on the older Mach kernel developed at Carnegie Mellon University. Although for a long time I thought that it was a microkernel, apparently it's more of a hybrid kernel, then. A second interesting tidbit of information was that Richard Stallman was also involved in the development of the POSIX standard. As he wrote on his own personal website back in 2011 about the origin of the name POSIX:

In the 1980s I was in the IEEE committee that wrote the standard that ultimately became known as POSIX. The committee set itself the task of standardizing interface specs for a Unix-like system, but had no short name for its work. When the first part of the specification was ready, someone gave it the name "IEEEIX", with a subtitle that included "Portable Operating System" — perhaps "Specifications for a Portable Operating System".

It seemed to me that nobody would ever say "IEEEIX", since the pronunciation would sound like a shriek of terror; rather, everyone would call it "Unix". That would have boosted AT&T, the GNU Project's rival, an outcome I did not want. So I looked for another name, but nothing natural suggested itself to me.

So I put the initials of "Portable Operating System" together with the same suffix "ix", and came up with "POSIX". It sounded good and I saw no reason not to use it, so I suggested it. Although it was just barely in time, the committee adopted it.

I think the administrators of the committee were as relieved as I was to give the standard a pronounceable name.

Again, nothing very important. Just a couple of tidbits of UNIX history. That's all. {link to this entry}

Electron vs. Tauri
[Sun Oct 13 09:57:28 CDT 2024]

This week, I ran into an article from InfoWorld comparing Electron to Tauri. Both are cross-platform frameworks for the development of software applications. I must say I had never heard of Tauri, but it sounds quite good. From what they say, it addresses one of Electron's major drawbacks, i.e., its memory consumption. From time to time, I install an app on my desktop, realize that it uses a significant amount of system resources, check to discover it's an Electron app, and immediately try to figure out a way to migrate away from it. Quite often, I end up running the web service directly in my browser instead. Speaking of which, there are also progressive web apps, of course. The problem there is that my favorite browser is Firefox, which for some reason stopped supporting these some time ago. I know there is a Progressive Web Apps for Firefox add-on, but it requires installing an additional piece of software on the desktop and it makes a mess of the XDG directories. So, when I have to run a web service as an app of sorts, I very much prefer to set up a dedicated Firefox profile for it and write my own desktop file with its own icon (see the sketch below). That appears to work fine and, believe it or not, uses fewer resources than Electron. In any case, the fact that Tauri doesn't use so many system resources (both in terms of disk space and memory) and is written in Rust does attract me. The problem, of course, is that there aren't many apps that use that framework at the moment. Hopefully, it will become more popular.
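
For what it's worth, something along these lines works for me; the profile name, URL, and icon below are just placeholders, and the exact Firefox flags may vary a little between versions. First, create a dedicated profile:

	firefox -CreateProfile webapp-example

Then drop a desktop file, say ~/.local/share/applications/webapp-example.desktop, that launches Firefox with that profile:

	[Desktop Entry]
	Type=Application
	Name=Example (web app)
	Exec=firefox -P webapp-example --no-remote https://app.example.com/
	Icon=example-webapp

{link to this entry}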

Is "Juice jacking" truly a thing?
[Sun Oct 13 09:49:47 CDT 2024]

I've come across a few videos warning about Juice jacking lately, so I decided to investigate whether there is truly anything to it. It appears to be a way to compromise mobile devices (e.g., smartphones or tablets) through the USB cable that those devices use for charging and data transfer. Apparently, a properly manipulated USB plug could then be used to install malware or copy data. Needless to say, if this is possible, then all USB chargers in public spaces (cafes, airports...) automatically become suspect. So, is it real? Well, it sounds to me as if the jury is still out. It does appear to be real in the sense of being possible. However, all I can find from credible sources is information about how this is more of a "theoretical compromise" done for "research purposes" that has not been spotted "in the wild". So, I'd call it a relative threat. It is certainly possible, but one is unlikely to run into it. In any case, it also sounds as if there are existing mitigations, both in the form of software updates and so-called USB data blockers. {link to this entry}

Smart TVs as Trojan horses
[Tue Oct 8 11:48:52 CDT 2024]

Ars Technica tells us about a report from the Center for Digital Democracy (CDD) providing the details on how so-called "smart TVs" behave like true Trojan horses in our homes:

The companies behind the streaming industry, including smart TV and streaming stick manufacturers and streaming service providers, have developed a "surveillance system" that has "long undermined privacy and consumer protection," according to a report from the Center for Digital Democracy (CDD) published today and sent to the Federal Trade Commission (FTC). Unprecedented tracking techniques aimed at pleasing advertisers have resulted in connected TVs (CTVs) being a "privacy nightmare," according to Jeffrey Chester, report co-author and CDD executive director, resulting in calls for stronger regulation.

(...)

"Not only does CTV operate in ways that are unfair to consumers, it is also putting them and their families at risk as it gathers and uses sensitive data about health, children, race, and political interests,” Chester said in a statement.

Worse yet, the latest developments in generative AI make things even more worrisome:

CDD's report highlights the CTV industry's interest in using generative AI to bolster its targeted advertising capabilities. Approaches currently being explored could alter what one viewer sees when streaming a show or movie compared to another viewer.

For example, Amazon Web Services and ad-tech company TripleLift are working with proprietary models and machine learning for dynamic product placement in streamed TV shows. The report, citing a 2021 AWS case study, says that "new scenes featuring product exposure can be inserted in real-time 'without interrupting the viewing experience.'"

(...)

Similarly, the report's authors describe concerns that the CTV industry's extensive data collection and tracking could potentially have a political impact. It asserts that political candidates could use such data to run "covert personalized campaigns" leveraging information on things like political orientations and "emotional states".

Not a very promising picture for anyone who cares about privacy. The thing, though, is that it has become almost impossible to avoid these devices when shopping for a new television set. In that sense, it's no different from the problems consumers may experience if they want to buy a car without chips. Increasingly, the choice is not even there. So much for the free market! {link to this entry}

Chris Siebenmann on init not being a service manager as such
[Mon Oct 7 14:22:57 CDT 2024]

I find Chris Siebenmann's Wandering Thoughts blog a very useful (and thoughtful) source of ideas for anything related to UNIX and system administration in general. Today, I came across a post from a few days ago explaining that init on UNIX was not a service manager as such. His explanation makes perfect sense. Neither the BSD nor the System V tradition truly had a version of init (i.e., PID 1) that supervised or managed the daemons. As a matter of fact, as Siebenmann explains, the daemon processes weren't even directly launched by init, but rather by their respective init shell scripts (see the sketch below). Also, when one needed to restart or kill the process, init was out of the picture. All this changed quite recently in Linux with the introduction of systemd, among others. But, for better or for worse, it wasn't there before.
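
To make that concrete, here is a deliberately simplified, hypothetical System V-style init script (my own sketch, not taken from Siebenmann's post): the shell script, not PID 1, starts and stops the daemon, and once the daemon puts itself in the background, init knows nothing about it.

	#!/bin/sh
	# Hypothetical /etc/init.d/mydaemon -- a simplified sketch, for illustration only
	case "$1" in
	  start)
	    # The rc script, not init (PID 1), launches the daemon;
	    # the daemon then forks itself into the background.
	    /usr/sbin/mydaemon
	    ;;
	  stop)
	    # Stopping it also happens here (or by hand with kill), using a
	    # pid file the daemon itself wrote; init is not involved at all.
	    kill "$(cat /var/run/mydaemon.pid)"
	    ;;
	  *)
	    echo "Usage: $0 {start|stop}"
	    ;;
	esac

{link to this entry}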

Musings on semantic versioning
[Mon Oct 7 14:18:00 CDT 2024]

Browsing around, I came across a blog post on version numbers that clarifies pretty well how semantic versioning works:

So for me, the MAJOR.MINOR.PATCH of semantic versioning breaks down like this:

MAJOR
Some change in the code base was made; either a change in API behavior, removal of some portion of the API, file format, or otherwise any visible change (except for bug fixes) in how the code works. Work will probably be required to update to this version of the library/module/class.

MINOR
Only additions to the API were made, in a backward compatible way. No work is expected, unless you were already using a name used in the updated library.

PATCH
A bug fix. The intent is for no work required to upgrade, unless you were relying upon the buggy behavior, or used a workaround that might now interfere with the library.
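
A quick, made-up example of how those rules might play out for some hypothetical library:

	1.4.2 -> 1.5.0   a new API call was added, backward compatible (MINOR)
	1.5.0 -> 1.5.1   a bug was fixed, no interface change (PATCH)
	1.5.1 -> 2.0.0   a deprecated function was removed (MAJOR)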

{link to this entry}

College students use Meta's smart glasses to identify people
[Fri Oct 4 11:28:51 CDT 2024]

One more of the scary uses of current AI technology (I was going to say "potential" uses, but this is already here; there is nothing "potential" about it) has been reported by The Verge: college students are using Meta's new smart glasses to identify people in real time.

Two Harvard students have created an eerie demo of how smart glasses can use facial recognition tech to instantly dox people’s identities, phone numbers, and addresses. The most unsettling part is the demo uses current, widely available technology like the Ray-Ban Meta smart glasses and public databases.

In the demo, you can see Nguyen and Caine Ardayfio, the other student behind the project, use the glasses to identify several classmates, their addresses, and names of relatives in real time. Perhaps more chilling, Nguyen and Ardayfio are also shown chatting up complete strangers on public transit, pretending as if they know them based on information gleaned from the tech.

The implications for people's privacy are obvious. Anyone could be out there using an application like this to identify people on the streets. It is a brave new world out there without a doubt. {link to this entry}

How GenAI will impact jobs
[Fri Oct 4 09:18:22 CDT 2024]

HPC Wire tells us about a study on how GenAI will impact jobs in the real world:

Clearly, no jobs that require hands-on execution or the application of physical force, such as bus driver or emergency room nurse, are going to be replaced by GenAI, which is just software at the end of the day (self-driving buses and robot-assisted surgery are real, but they also require a lot more tech than just GenAI). Considering that more than half of jobs involved in this report required some type of physical execution, the prospects of full GenAI replacement look pretty bleak.

But that’s not to say there will be no benefit. Indeed says that, even for jobs like bus driver or nurse, GenAI could help with repetitive tasks, such as documentation, which will “allow workers to refocus on the core skills necessary in these roles,” Hering and Rojas write.

The researchers concluded that about 29% of jobs could “potentially” be replaced by GenAI “as it continues to improve and if certain changes to workplaces and/or working norms occur going forward,” the researchers write. The jobs that GenAI will have the biggest impact on are “more stereotypical office jobs,” the researchers write.

Across the three measures at the heart of the study–theoretical knowledge; problem solving; and physical job skills–GenAI excels the most with theoretical knowledge, followed closely by problem solving. In fact, theoretical knowledge was the only attribute that GenAI gave itself a 5, the top score, thanks to the extensive training of LLMs on large amounts of information on the Web, and the capability to use search engines.

GPT-4o also scored decently on problem-solving. It rated itself a 3 for 70% of the skills it assessed, and for 28% of those tasks, it said it was “possible” that it could replace a human. It also received several 4s, and rated itself “likely” that could replace a human for 3% of the tasks.

It all makes perfect sense, I think. If anything, I'd add that, in my own experience, AI is not truly there yet. Hmmm, it *may* be able to help with theoretical knowledge and summarizing things. It *may* also help by making suggestions. But little else. To say that AI can help with problem solving, as the study states, is, I think, a stretch. For the time being, all it does, again in my own experience, is to provide some general and vague information on how to start troubleshooting a problem. That's all. {link to this entry}

Amazon to introduce even more ads
[Fri Oct 4 09:12:21 CDT 2024]

According to the Financial Times, Amazon is getting ready to increase the number of ads on Prime Video. Let's just remember that, not so long ago, the absence of ads was precisely one of the reasons why we liked streaming. In any case, to me, what is truly astonishing is the following:

The company said it had not seen a sharp drop in subscribers since it introduced advertising to its Prime Video platform eight months ago, allaying fears among top executives of a customer backlash, as it attempts to win over more brands to its streaming service.

If it sounds as if they are trying to get away with as much as they can, it's because they are. And yes, I know some people will immediately argue that we, consumers, have a choice. But do we? After all, Prime Video comes bundled with other services as part of the Amazon Prime membership, a package that no other vendor offers. {link to this entry}

New Servo rendering engine
[Fri Oct 4 09:05:37 CDT 2024]

Checking the tech news today, I came across an article on Phoronix providing an update on the latest features added to the Servo Browser. I must say I don't remember having heard of the Servo rendering engine until now, but it sounds like an interesting project:

Servo is an experimental browser engine designed to take advantage of the memory safety properties and concurrency features of the Rust programming language. It seeks to create a highly parallel environment, in which rendering, layout, HTML parsing, image decoding, and other engine components are handled by fine-grained, isolated tasks. It also makes use of GPU acceleration to render web pages quickly and smoothly.

According to the Wikipedia entry linked above, it's only a research project. It was started by the Mozilla Corporation, and later transferred to the Linux Foundation. I can see on their download page that they have binaries available for Linux and Android. I'll see if I can give it a try. {link to this entry}

Reddit or the dangers of trusting a corporation with a public forum
[Tue Oct 1 11:08:58 CDT 2024]

I'm convinced that, by now, we are all noticing how tricky it is to trust corporations with public forums. As I've said before, plenty of people these days refer to platforms such as Twitter, Facebook or Reddit as our "digital public square". As a matter of fact, the corporations behind those platforms promote that view and tend to refer to their services in similar terms. The problem I see with that is that, far from being truly public, these are actually very private. Not only that but, since they are maintained by private businesses, they also have an obvious economic incentive to make a profit. None of that is necessarily compatible with a true civic conversation in the "public square". It's almost as if our public space had been privatized and, all of a sudden, a bunch of tycoons get to decide what is acceptable or not in our public discourse. It may not be "big government", but I don't see the advantage, to be honest. Today, for instance, we read that Reddit has announced a controversial policy change to prevent future protests from moderators:

Following site-wide user protests last year that featured moderators turning thousands of subreddits private or not-safe-for-work (NSFW), Reddit announced that mods now need its permission to make those changes.

Reddit's VP of community, going by Go_JasonWaterfalls, made the announcement about what Reddit calls Community Types today. Reddit's permission is also required to make subreddits restricted or to go from NSFW to safe-for-work (SFW). Reddit's employee claimed that requests will be responded to “in under 24 hours.”

(...)

Last year, Reddit announced that it would be charging a massive amount for access to its previously free API. This caused many popular third-party Reddit apps to close down. Reddit users then protested by turning subreddits private (or read-only) or by only showing NSFW content or jokes and memes. Reddit then responded by removing some moderators; eventually, the protests subsided.

Reddit, which previously admitted that another similar protest could hurt it financially, has maintained that moderators' actions during the protests broke its rules. Now, it has solidified a way to prevent something like last year's site-wide protests from happening again.

The reality is that only a truly public platform could guarantee some level of neutrality or, at the very least, democratic control. When it comes to this topic, I'd very much prefer public forums set up by the Government, to be honest. Better yet, I'd like to see the fediverse succeed. The problem is that, in order for it to succeed, the majority of users would have to abandon the old platforms and switch to these other services en masse, which is quite unlikely. {link to this entry}