A Jamendo client for Linux, anyone?
[Mon Sep 30 14:39:53 CDT 2024]
Searching around for a way to listen to music released under an open license, I came across Jamendo once again. I say "once again" because, years ago, I used to listen to music from that platform while working away on my laptop. If I remember correctly, at the time I used the Rhythmbox player. However, as far as I can see, that player no longer supports Jamendo: it includes a plugin to enable access to Magnatune, but nothing for Jamendo. So, I decided to search around for a music player that supports Jamendo and, to my surprise, couldn't find anything. I read somewhere that Cantata does. However, it no longer appears to be maintained and, besides, when I installed it, it didn't work for me at all. I had a similar issue with Clementine, which is actively maintained, but which I found difficult to use and, at any rate, I couldn't access any music from Jamendo after installing it. So, back to Spotify it is, I'm afraid. {link to this entry}
Mozilla and public AI
[Mon Sep 30 14:34:28 CDT 2024]
And, speaking of AI, it's nice to see that Mozilla is trying to promote open source and the public interest when it comes to AI too. As they state:

While I still think there is a serious danger of applying the "shiny new tool" of AI to absolutely everything around us under the assumption that it's some sort of magic wand that will solve all our problems, there is no doubt in my mind that AI has potential and will be very useful in certain areas. So, it's good to see Mozilla defending the public interest in this area. {link to this entry}
Do AI coding assistants truly help?
[Mon Sep 30 14:26:35 CDT 2024]
Slashdot shared today that, apparently, it's not clear whether AI coding assistants truly help. Basically, a report from the firm Uplevel "found no significant improvements for developers" using Microsoft Copilot. On the contrary, using the tool introduced 41% more bugs. Developers' activity also shifted more toward reviewing code, but that didn't appear to significantly increase their productivity. My approach to this topic, as to anything else new, is that it will most likely help in some areas, but not in others. It won't be the panacea that so many people think it is. It will also introduce a bunch of issues and problems that we didn't foresee. In other words, it will be neither good nor bad, but a mixture of both, like everything else in life. One way or another, we'd better do our best to counterbalance the current hype surrounding everything related to AI. Otherwise, we risk a very serious bursting of the bubble sooner rather than later. {link to this entry}
The dangers of a privatized Internet
[Fri Sep 27 07:32:50 CDT 2024]
El Salto Diario published an interview with Marta G. Franco, the author of Las redes son nuestras, a book that reminds us that, not so long ago, the Internet was something other than a playpen where large companies appropriate as much personal information as they can, sell our most private data and try to manipulate us. That's pretty much what happened, indeed. The Internet was created by the public sector. I know it's difficult to believe after hearing decades of libertarian propaganda. To me, what's interesting is that the most creative part of it (i.e., the idea of connecting disparate research centers in a common decentralized network, the TCP/IP protocol, a way to share documents from one's computer, email, real-time chat, bulletin boards and online forums, the world wide web...) was actually invented by the supposedly backwards and unimaginative public sector. Where the private sector shone was in finding a way to make money with it: spreading it to every single corner of the planet, making it the center of our lives and controlling it. Mind you, the author's depiction of the Internet in the early 2000s sounds a bit idyllic to me:

I'd say that by then, by the early 2000s, "Big Tech" was already on the rise and clearly appropriating the space. Yes, there were still remnants of resistance. Not only from the most militant groups (e.g., the hacklabs), but also from regular people who tried to use the new technology for their own purposes in their own daily lives. That's precisely when the "platforms" showed up as a way to privatize and commercialize that space (or, as most people would say these days, to "monetize" it). For, under capitalism, no sphere of our lives can go without commercialization. Franco's idea of becoming an "inhabitant" of the Internet, rather than a mere "user", sounds interesting:

The goal is not impossible, but it definitely goes against the current. The way I see it, the Internet and the new technologies are one more area where we can fight against big corporate interests in an effort to decolonize our lives. But it's just that: one more area. There are plenty of other areas of our daily lives that have been almost entirely taken over by these commercial interests. Areas where we become mere "users", passive consumers of products and content. {link to this entry}
OpenAI, business demands and sustainability
[Fri Sep 27 07:13:49 CDT 2024]
I'm not sure how many times we are going to go through this cycle, but here we are again. I suppose I'm old enough to be familiar with the story. Not so long ago, plenty of people were all excited about AI and, in particular, about the fact that a company called OpenAI, a spinoff of a non-profit organization, was leading the charge. After all, we could trust a non-profit much better than "Big Tech", right? Well, it was announced yesterday that OpenAI is to remove non-profit control and give Sam Altman equity. Oh, surprise. Nobody could see this one coming, right? It's not as if we haven't been here before. The problem, what we don't want to see, is that the issue is not personal ethics or personal character, but the rules of the game. This is not a matter of personal decisions, but rather of systemic forces. In any case, yesterday we also read that OpenAI asked the US Government to approve energy-guzzling 5GW data centers:

And there we have it. The path to sustainability. Ever heard of the Jevons paradox? That's another thing we forget time and time again when we get too excited about a shiny new technology that is going to save us. Once again, we don't take the rules of the game into account. The systemic forces I was referring to above. No amount of "good character" can overcome those. {link to this entry}
PagerDuty, Android, and vendor practices
[Thu Sep 26 11:54:00 CDT 2024]
I was asked to help test a new tool for our on-call shifts called PagerDuty. To be clear, it does appear to be a very useful, complete and easy-to-use tool. However, I ran into a problem when attempting to install their mobile app, which is recommended during the initial account setup. As it turned out, it wouldn't install on my old Motorola Moto G7 running Android 10 because it requires Android 11 or newer. Nor would it install on my Samsung Galaxy Tab S7 because it's running an even older version of Android. Which leads me to what truly is the main issue here: mobile device manufacturers release their products with their own branded version of Android that they quickly stop supporting (and, therefore, quickly stop releasing software updates for), even though the device is still perfectly good and usable. It should be obvious that this practice is, first of all, not environmentally sustainable and, second, not respectful of the freedom of end users. Not that this surprises anyone, of course. Yes, it is theoretically possible to install custom ROMs on these devices. However, only technically skilled people can do that, and even they risk damaging their devices and ending up with a brick in their hands. Not only that, but vendors also do whatever they can to make it difficult to install and run these custom images. The end result? Without a doubt, when it comes to mobile devices we are all less free than with traditional computers, where at least we can install an open source operating system and run open source applications. {link to this entry}
Nginx & FastCGI complaining about read-only SQLite database
[Tue Sep 24 12:26:53 CDT 2024]
After migrating my personal website from a server running Apache to another one running Nginx, a small web app I had written in PHP was unable to write to a SQLite database file. Here is a snippet of the error:

2024/09/24 12:12:41 [error] 1433147#1433147: *15249643 FastCGI sent in stderr: "PHP message: PHP Fatal error: \
Uncaught PDOException: SQLSTATE[HY000]: General error: 8 attempt to write a readonly database in /path/file.php:10
Stack trace: #0 /var/www/sacredchaos.com/main/apps/viewinglist/delete-movie.php(10): PDO->exec() #1 {main} thrown in ...

Obviously, this was a file permissions issue. However, what wasn't so obvious is that simply changing the ownership of the SQLite database file itself to user www-data (the user account Nginx runs under) is not enough. I also had to change the ownership of the parent directory to www-data, presumably because SQLite needs to create its journal files in the same directory. That fixed it.
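For reference, the fix boils down to something like the following. The database filename (movies.db) is just a placeholder, since the real name doesn't appear in the log, and I'm assuming the file sits in the same directory as the app:

    # Both the SQLite file and its parent directory need to be writable by www-data,
    # because SQLite also creates journal/temporary files next to the database.
    chown www-data:www-data /var/www/sacredchaos.com/main/apps/viewinglist
    chown www-data:www-data /var/www/sacredchaos.com/main/apps/viewinglist/movies.db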
{link to this entry}
Using same rule for two locations in Nginx
[Tue Sep 24 12:22:36 CDT 2024]
I needed to use the same rule for two different locations on my website, which is served by Nginx. However, it wasn't as straightforward as I had hoped. In the end, though, I found the answer on Stack Overflow:

    location ~ ^/(first/location|second/location)/ {
        ...
    }
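For a bit more context, here is a minimal sketch of how this sits inside a server block. The try_files line is just a placeholder for whatever the shared rule actually does; only the regex location itself comes from the Stack Overflow answer:

    # Instead of duplicating the same rule in two prefix locations...
    #
    #   location /first/location/  { try_files $uri $uri/ =404; }
    #   location /second/location/ { try_files $uri $uri/ =404; }
    #
    # ...a single regex location matches both paths:
    location ~ ^/(first/location|second/location)/ {
        try_files $uri $uri/ =404;
    }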
{link to this entry}
"Smart glasses", once more
[Tue Sep 24 07:19:45 CDT 2024]
The Verge published an article on how Meta has a major opportunity to win the AI hardware race with their new "smart glasses". According to them, these glasses "exceeded expectations in a year when AI gadgets flopped." And yes, they are referring to the Humane Ai Pin and the Rabbit r1 when they talk about devices that flopped. To be honest, I'm not sure the Ray-Ban Meta glasses will succeed where those others failed. Yes, they look more normal. But what do they give you for their $300 price tag? According to their description:

So, basically, it sounds as if they may have gotten the engineering right. But do we need them? Are they useful? The use case scenarios mentioned in the article leave me unmoved. Perhaps because I'm not much of a social media guy? I suppose there is something else that I find a bit unnerving about this approach. It's almost as if we now see our lives as a constant television show. As if we need to broadcast every single moment of our lives. I'm not sure I like that. {link to this entry}
The beginning of the "deep doubt" era
[Thu Sep 19 11:24:42 CDT 2024]
Benj Edwards writes on Ars Technica about how the arrival of AI fakes may bring about the beginning of the "deep doubt" era:

It seems clear that this is already happening. As the technology develops, as AI becomes better and better at what it does, the amount of "noise" builds up towards what Edwards refers to as a "cultural singularity", a moment when we can no longer trust most (any?) sources of information. This, in turn, must end up seriously eroding social trust. And, without social trust, how can you have societies in the first place? The issue is pretty serious. Yes, all this has happened before. Media of doubtful origin is nothing new. Political and cultural manipulation has always been around. The distortion of facts has always been a key practice of power in all fields. However, what is different this time around is the sheer volume of informational noise that is being spread around. We certainly run the risk of being buried in "informational junk". Edwards tries to come up with a more positive approach. His starting point is context:

I'm afraid none of that will work, though. Context itself is a construct. It's a framework built with all the elements that constitute the reality around us. And, if everything around us is noise, the context itself will quickly become unstable and impossible to chart. We'll be absolutely disoriented. Finally, Edwards is not convinced that we'll be able to solve this problem through technology itself:

Again, if "well-crafted digital media artifacts will be completely indistinguishable from human-created ones", what hope do we have that we, humans, will be able to distinguish them? No amount of "context" will solve this problem. I wonder if, as a reaction, we may end up with Luddite communities sprouting here and there. It reminds me of the future depicted in certain science fiction movies. {link to this entry}
Releasing Windows as open source?
[Thu Sep 19 09:21:28 CDT 2024]
Thom Holwerda, the maintainer of the OS News website, argues in favor of releasing Windows as open source as the only viable way forward for Microsoft. Interestingly enough, this is something my older son has been telling me for well over a decade now. He is convinced that, sooner or later, Microsoft will either release the Windows source code as open source or, more likely, switch to Linux and build a GUI framework, a desktop and tools on top of it, sort of like Apple did when they released Mac OS X. One of the most powerful arguments Holwerda makes is that the Windows OS only represents about 10% of Microsoft's total revenue (i.e., $22 billion out of $211 billion). As he states, Azure alone is almost four times as large, at $80 billion. Worse yet, even LinkedIn brought in $15 billion in revenue, not so far from the Windows figure at all! If we also take into account the increasing cost of maintaining the code base and the fact that Windows is nearly irrelevant in the mobile market (one might say nearly so in the server market too), the argument in favor of releasing it as open source is indeed quite powerful. {link to this entry}