Spotify repo missing public signature
[Sat Dec 28 09:15:57 CST 2024]
Today, when I ran the usual apt command to install the latest updates on my laptop running Debian stable, it showed the following error when contacting the Spotify repo:

GPG error: http://repository.spotify.com stable InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY C85668DF69375001
The repository 'http://repository.spotify.com stable InRelease' is not signed.

Searching around turned up a suggested fix that didn't solve the problem as written, but it showed me how to solve it. Basically, all I had to do was download the relevant key from the Spotify repo and add it to my own apt configuration:

$ curl -sS https://download.spotify.com/debian/pubkey_C85668DF69375001.gpg | sudo gpg --dearmor --yes \
    -o /etc/apt/trusted.gpg.d/spotify-2024-12-28-C85668DF69375001.gpg
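In case it helps, here's the quick sanity check I'd run afterwards. These are just standard gpg and apt commands, nothing Spotify-specific, and the filename is simply the one I chose above:

# List the key we just installed; the fingerprint should end in C85668DF69375001
$ gpg --show-keys /etc/apt/trusted.gpg.d/spotify-2024-12-28-C85668DF69375001.gpg

# Refresh the package lists; the NO_PUBKEY error should be gone now
$ sudo apt update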
That still doesn't change the fact that it is quite annoying to run into this issue periodically, and also that Spotify doesn't appear to announce it or publish any official documentation on how to address it. {link to this entry}

Notes on A Philosophy of Software Design, by John Ousterhout
[Thu Dec 26 08:42:53 CST 2024]
Came across a short summary of the main ideas from the book A Philosophy of Software Design, by John Ousterhout. The notes concentrate on three ideas: zero tolerance towards complexity, smaller components are not necessarily better for modularity, and exception handling accounts for a lot of complexity. Along with a short description of each idea, the blogger also offers a few code examples, together with suggested ways to fix the issues. Overall, I'd say it's a good piece. {link to this entry}

A sensible approach to AI-assisted coding
[Thu Dec 12 13:45:13 CST 2024]
There is so much hype about AI these days that, for the most part, we only read/hear statements about how it's the miracle solution to all our problems or, on the other extreme, the root of all evils that has come to wipe humanity from the face of the earth. Unfortunately, it's difficult to come by sensible, well-reasoned positions located somewhere in the middle, which is where I think the right position tends to be more often than not. One of these, I think, is Addy Osmani's take on what he calls "the 70% problem". This is due, Osmani argues, to a learning paradox. As a consequence, when it comes to AI-assisted coding, Osmani recommends using these tools only in the following scenarios: as prototyping accelerators for experienced developers, as learning aids for those committed to understanding development, or as MVP generators for validating ideas quickly. Overall, his approach makes far more sense to me than the vast majority of ideas I hear about these days. {link to this entry}

Mozilla's rebrand
[Sun Dec 8 09:51:23 CST 2024]
According to their own blog, Mozilla has launched a new rebrand campaign "for the next era of tech". As they explain: Mozilla isn’t just another tech company — we’re a global crew of activists, technologists and builders, all working to keep the internet free, open and accessible. For over 25 years, we’ve championed the idea that the web should be for everyone, no matter who you are or where you’re from. Now, with a brand refresh, we’re looking ahead to the next 25 years (and beyond), building on our work and developing new tools to give more people the control to shape their online experiences.

I've been using their products for as long as I can remember. The old Netscape Navigator was my first browser, at least from the moment when I started accessing the Internet from home here in the US back in the mid-1990s. Prior to that, during my years at the University of Limerick, I spent a good amount of time in the computer lab poking around. That was back in the early 1990s. That's truly when I used the Internet for the first time. But I don't remember what browser I used back then. Chances are it was NCSA Mosaic. In any case, I suppose what I mean is that I use Firefox for technical reasons (it's a good, solid browser, with a great feature set, privacy-oriented by default, at least when compared to the other major browsers), personal reasons (a personal attachment developed as a consequence of being involved in supporting Netscape products in the mid- to late 1990s, as well as looking forward to the release of their source code in 1998), and political reasons (I don't want to see a browser developed by a single corporate entity be the only choice).

Yet, aside from the fact that I honestly don't like the new logo (but hey, that is an issue of personal preference, right?), I find it difficult to believe that this should be Mozilla's top priority at a moment when their very survival is at risk. Other people pointed out as much, some of them quite poignantly, in the Slashdot thread on the topic. Yes, I understand the temptation to think that, since the product is actually quite good, the only reason it's lagging behind in the market is lack of recognition. However, I don't think that a mere rebranding campaign will solve that problem. I'm afraid the issue is deeper than that.

As happened with Internet Explorer back in the 1990s, lots of people use Google Chrome (or Safari) because it's the default on their phones and tablets. To them, it's just "the browser". Worse yet, in many cases, it's "the Internet". Let's not fool ourselves: after so many years, that is still the overall level of computer literacy out there, even among the younger generations. That being the case, I'm not sure how it can be changed. There was a time in the 2000s when Firefox rose to fame and Internet Explorer lost support. I'm still not sure how (or why) that happened. But that's what the Mozilla Foundation should be studying. A rebranding campaign, I think, is just the easy, lazy response from mediocre executives. {link to this entry}

How close is AI to human-level intelligence?
[Fri Dec 6 15:41:51 CST 2024]
Nature published an excellent article (you can read it with a free account) reviewing how close AI is to human-level intelligence which, among other things, provides a very good introduction to AI technology itself. That technology has taken us quite far, actually. However, as the article explains, LLMs have limitations: they lack the capacity to create, adapt and improvise, at least so far. They also lack the "big picture", at least, once again, so far.

None of this is to say that AI technology as it is right now is not truly impressive. Also, there is no doubt that it has a lot of potential. But, as tends to happen, the field has been dominated by too much hype. It has certain uses, that's for sure. But we must be careful where we use it, must constantly double-check what it says, and, without a doubt, it's by no means anything close to human-level intelligence, at least not yet. So, what could bring that about? The article also suggests a possible path.

Who knows? Perhaps we'll manage to build true Artificial General Intelligence (AGI) some day. Or perhaps not. One way or another, I'd say that AI is here to stay, it will indeed transform the way we do things, and it will be useful in certain areas. In that sense, it may be similar to the Internet and the Web. There was a lot of hype around those technologies back in the 1990s and early 2000s. Lots of hype. Lots of exaggeration. Also, we failed to see the drawbacks. That's human nature, it seems. But, in the end, a couple of decades later, who doubts that they did transform the way we work, live, and organize our societies? I'd expect AI to follow a similar path. {link to this entry}

Social media, "moral outrage" and misinformation
[Wed Dec 4 20:03:06 CST 2024]
Thanks to ArsTechnica we learn about a study that found that people are more likely to share content that evokes moral outrage, even if it's false. As the article explains, expressing moral outrage runs very deep in our minds. No surprise there, right? Also, we use it as an easy way to show commitment to a group. Again, no surprise there either, right? But it gets much worse. Now, that's truly bad news! It's not a problem of distinguishing between true and false. Rather, it's social acceptance that we are after. If this was already bad enough in the old days, when our voices could only be heard in our town or neighborhood, or perhaps our region or even nation if we were famous enough, the Internet and social media now give us a global audience. Now, that's trouble! {link to this entry}