Spotify repo missing public key
[Sat Dec 28 09:15:57 CST 2024]

Today, when I ran the usual apt command to install the latest updates on my laptop running Debian stable, it showed the following error when contacting the Spotify repo:

GPG error: http://repository.spotify.com stable InRelease: The following signatures couldn't be verified because the public key \
is not available: NO_PUBKEY C85668DF69375001
The repository 'http://repository.spotify.com stable InRelease' is not signed.
Searching around turned up a fix that didn't work as-is, but it pointed me in the right direction. Basically, all I had to do was download the relevant key from the Spotify repo and add it to my own apt configuration:
$ curl -sS https://download.spotify.com/debian/pubkey_C85668DF69375001.gpg | sudo gpg --dearmor --yes \
-o /etc/apt/trusted.gpg.d/spotify-2024-12-28-C85668DF69375001.gpg
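For what it's worth, a quick sanity check after adding the key (just standard gpg and apt commands, using the same file path as above) should print the key's details and let apt refresh the package lists without complaint:
$ gpg --show-keys /etc/apt/trusted.gpg.d/spotify-2024-12-28-C85668DF69375001.gpg
$ sudo apt update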
That still doesn't change the fact that it's quite annoying to run into this issue periodically, especially since Spotify doesn't appear to announce it or publish any official documentation on how to address it. {link to this entry}

Notes on A Philosophy of Software Design, by John Ousterhout
[Thu Dec 26 08:42:53 CST 2024]

Came across a short summary of the main ideas from the book A Philosophy of Software Design, by John Ousterhout. The notes concentrate on three ideas: zero tolerance towards complexity, smaller components are not necessarily better for modularity, and exception handling accounts for a lot of complexity. Along with a short description of each, the blogger offers a few code examples, together with suggested ways to fix the issues. Overall, I'd say it's a good piece. {link to this entry}

A sensible approach to AI-assisted coding
[Thu Dec 12 13:45:13 CST 2024]

There is so much hype about AI these days that, for the most part, we only read/hear statements about how it's the miracle solution to all our problems or, at the other extreme, the root of all evils that has come to wipe out humanity from the face of the earth. Unfortunately, it's difficult to come by sensible, well-reasoned positions somewhere in the middle, which is where I think the right answer tends to lie more often than not. One of these, I think, is Addy Osmani's take on what he calls "the 70% problem":

A tweet that recently caught my eye perfectly captures what I've been observing in the field: Non-engineers using AI for coding find themselves hitting a frustrating wall. They can get 70% of the way there surprisingly quickly, but that final 30% becomes an exercise in diminishing returns.

This "70% problem" reveals something crucial about the current state of AI-assisted development. The initial progress feels magical - you can describe what you want, and AI tools like v0 or Bolt will generate a working prototype that looks impressive. But then reality sets in.

This is due, Osmani argues, to the following learning paradox:

There's a deeper issue here: The very thing that makes AI coding tools accessible to non-engineers - their ability to handle complexity on your behalf - can actually impede learning. When code just "appears" without you understanding the underlying principles:

  • You don't develop debugging skills
  • You miss learning fundamental patterns
  • You can't reason about architectural decisions
  • You struggle to maintain and evolve the code

This creates a dependency where you need to keep going back to AI to fix issues, rather than developing the expertise to handle them yourself.

As a consequence, when it comes to AI-assisted coding, Osmani recommends using these tools only in the following scenarios: as prototyping accelerators for experienced developers, as learning aids for those committed to understanding development, or as MVP generators for validating ideas quickly. Overall, his approach makes far more sense to me than the vast majority of ideas I hear about these days. {link to this entry}

Mozilla's rebrand
[Sun Dec 8 09:51:23 CST 2024]

According to their own blog, Mozilla has launched a new rebrand campaign "for the next era of tech". As they explain:

Mozilla isn’t just another tech company — we’re a global crew of activists, technologists and builders, all working to keep the internet free, open and accessible. For over 25 years, we’ve championed the idea that the web should be for everyone, no matter who you are or where you’re from. Now, with a brand refresh, we’re looking ahead to the next 25 years (and beyond), building on our work and developing new tools to give more people the control to shape their online experiences.

I've been using their products for as long as I can remember. The old Netscape Navigator was my first browser, at least from the moment when I started accessing the Internet from home here in the US back in the mid-1990s. Prior to that, during my years at the University of Limerick, I spent a good amount of time in the computer lab poking around. That was back in the early 1990s. That's truly when I used the Internet for the first time. But I don't remember what browser I used back then. Chances are it was NCSA Mosaic.

In any case, I suppose what I mean is that I use Firefox for technical reasons (it's a good, solid browser, with a great feature set and privacy-oriented by default, at least when compared to the other major browsers), personal reasons (a personal attachment developed as a consequence of being involved in supporting Netscape products in the mid- to late-1990s, as well as looking forward to the release of their source code in 1998), and political reasons (I don't want to see a browser developed by a single corporate entity be the only choice).

Yet, aside from the fact that I honestly don't like the new logo (but hey, that is an issue of personal preference, right?), I find it difficult to believe that this should be Mozilla's top priority at a moment when their very survival is at risk. As other people pointed out in the Slashdot thread on the topic:

When you're caught up in branding and explaining the fabulousness of said branding to your audience, you're by definition on the wrong track.

Or, perhaps more pointedly:

Nothing says "underground group of activists" like a new branding from a global powerhouse design firm.

Yes, I understand the temptation to think that, since the product is actually quite good, the only reason it's lagging behind in the market is lack of recognition. However, I don't think that a mere rebranding campaign will solve that problem. I'm afraid the issue is deeper than that. As happened with Internet Explorer back in the 1990s, lots of people use Google Chrome (or Safari) because it's the default on their phones and tablets. To them, it's just "the browser". Worse yet, in many cases, it's "the Internet". Let's not fool ourselves. After so many years, that is still the overall level of computer literacy out there. Yes, even among the younger generations. That being the case, I'm not sure how that can be changed.

There was a time in the 2000s when Firefox rose to fame and Internet Explorer lost ground. I'm still not sure how (or why) that happened. But that's what the Mozilla Foundation should be studying. A rebranding campaign, I think, is just the easy, lazy response from mediocre executives. {link to this entry}

How close is AI to human-level intelligence?
[Fri Dec 6 15:41:51 CST 2024]

Nature (you can create a free account to read it) published an excellent article reviewing how close AI is to human-level intelligence, which, among other things, provides a very good introduction to the technology itself:

During training, the most powerful LLMs — such as o1, Claude (built by Anthropic in San Francisco) and Google’s Gemini — rely on a method called next token prediction, in which a model is repeatedly fed samples of text that has been chopped up into chunks known as tokens. These tokens could be entire words or simply a set of characters. The last token in a sequence is hidden or ‘masked’ and the model is asked to predict it. The training algorithm then compares the prediction with the masked token and adjusts the model’s parameters to enable it to make a better prediction next time.

The process continues — typically using billions of fragments of language, scientific text and programming code — until the model can reliably predict the masked tokens. By this stage, the model parameters have captured the statistical structure of the training data, and the knowledge contained therein. The parameters are then fixed and the model uses them to predict new tokens when given fresh queries or ‘prompts’ that were not necessarily present in its training data, a process known as inference.

The use of a type of neural network architecture known as a transformer has taken LLMs significantly beyond previous achievements. The transformer allows a model to learn that some tokens have a particularly strong influence on others, even if they are widely separated in a sample of text. This permits LLMs to parse language in ways that seem to mimic how humans do it — for example, differentiating between the two meanings of the word ‘bank’ in this sentence: “When the river’s bank flooded, the water damaged the bank’s ATM, making it impossible to withdraw money.”

This has taken us so far, which is actually quite far. However, as the article explains, LLMs have limitations.

LLMs, says Chollet, irrespective of their size, are limited in their ability to solve problems that require recombining what they have learnt to tackle new tasks. “LLMs cannot truly adapt to novelty because they have no ability to basically take their knowledge and then do a fairly sophisticated recombination of that knowledge on the fly to adapt to new context.”

In other words, they lack the capacity to create, adapt and improvise, at least so far. But they have further limitations:

For a start, the data used to train the models are running out. Researchers at Epoch AI, an institute in San Francisco that studies trends in AI, estimate that the existing stock of publicly available textual data used for training might run out somewhere between 2026 and 2032. There are also signs that the gains being made by LLMs as they get bigger are not as great as they once were, although it’s not clear if this is related to there being less novelty in the data because so many have now been used, or something else. The latter would bode badly for LLMs.

Raia Hadsell, vice-president of research at Google DeepMind in London, raises another problem. The powerful transformer-based LLMs are trained to predict the next token, but this singular focus, she argues, is too limited to deliver AGI. Building models that instead generate solutions all at once or in large chunks could bring us closer to AGI, she says. The algorithms that could help to build such models are already at work in some existing, non-LLM systems, such as OpenAI’s DALL-E, which generates realistic, sometimes trippy, images in response to descriptions in natural language. But they lack LLMs’ broad suite of capabilities.

Or, to put it a different way, they lack the "big picture", at least, once again, so far.

None of this is to say that AI technology as it is right now is not truly impressive. Also, there is no doubt that it has a lot of potential. But, as tends to happen, the field has been dominated by too much hype. It has certain uses, that's for sure. But we must be careful about where we use it, constantly double-check what it says, and keep in mind that it's by no means anything close to human-level intelligence, at least not yet. So, what could bring that about? The article also suggests a possible path.

The intuition for what breakthroughs are needed to progress to AGI comes from neuroscientists. They argue that our intelligence is the result of the brain being able to build a ‘world model’, a representation of our surroundings. This can be used to imagine different courses of action and predict their consequences, and therefore to plan and reason. It can also be used to generalize skills that have been learnt in one domain to new tasks by simulating different scenarios.

Who knows? Perhaps we'll manage to build true Artificial General Intelligence (AGI) some day. Or perhaps not. One way or another, I'd say that AI is here to stay, it will indeed transform the way we do things, and it will be useful in certain areas. In that sense, it may be similar to the Internet and the Web. There was a lot of hype around those other technologies back in the 1990s and early 2000s. Lots of hype. Lots of exaggeration. We also failed to see the drawbacks. That's human nature, it seems. But, in the end, a couple of decades later, who doubts that they did transform the way we work, live, and organize our societies? I'd expect AI to follow a similar path. {link to this entry}

Social media, "moral outrage" and misinformation
[Wed Dec 4 20:03:06 CST 2024]

Thanks to ArsTechnica we learn about a study that found that people are more likely to share content that evokes moral outrage, even if it's false. As the article explains:

“What we know about human psychology is that our attention is drawn to things rooted in deep biases shaped by evolutionary history,” Brady says. Those things are emotional content, surprising content, and especially, content that is related to the domain of morality. “Moral outrage is expressed in response to perceived violations of moral norms. This is our way of signaling to others that the violation has occurred and that we should punish the violators. This is done to establish cooperation in the group,” Brady explains.

This is why outrageous content has an advantage in the social media attention economy. It stands out, and standing out is a precursor to sharing. But there are other reasons we share outrageous content. “It serves very particular social functions,” Brady says. “It’s a cheap way to signal group affiliation or commitment.”

Cheap, however, didn’t mean completely free. The team found that the penalty for sharing misinformation, outrageous or not, was loss of reputation—spewing nonsense doesn’t make you look good, after all. The question was whether people really shared fake news because they failed to identify it as such or if they just considered signaling their affiliation was more important.

So, expressing moral outrage runs very deep in our minds. No surprise there, right? Also, we use it as an easy way to show commitment to a group. Again, no surprise there either, right? But it gets much worse.

It turned out that most people could discern between true and fake news. Yet they were willing to share outrageous news regardless of whether it was true or not—a result that was in line with previous findings from Facebook and Twitter data. Many participants were perfectly OK with sharing outrageous headlines, even though they were fully aware those headlines were misinformation.

Now, that's truly bad news! It's not a problem of distinguishing between true and false. Rather, it's social acceptance that we are after. If this was already bad enough in the old days, when our voices could only be heard in our town or neighborhood (or perhaps our region or even nation, if we were famous enough), the Internet and social media now give us a global audience. Now, that's trouble! {link to this entry}