A Jamendo client for Linux, anyone?
[Mon Sep 30 14:39:53 CDT 2024]

Searching around for a way to listen to music released under open licenses, I came across Jamendo once again. I say "once again" because, years ago, I used to listen to music from that platform while working away on my laptop. If I remember correctly, I used the Rhythmbox player at the time. However, as far as I can see, Rhythmbox no longer supports Jamendo: it still ships a plugin for Magnatune, but nothing for Jamendo. So I went looking for a music player that does support Jamendo and, to my surprise, couldn't find anything. I read somewhere that Cantata does, but it no longer appears to be maintained and, when I installed it, it didn't work for me at all. I had a similar problem with Clementine, which is actively maintained, but I found it difficult to use and, at any rate, I couldn't access any music from Jamendo after installing it. So, back to Spotify it is, I'm afraid. {link to this entry}

Mozilla and public AI
[Mon Sep 30 14:34:28 CDT 2024]

And, speaking of AI, it's nice to see that Mozilla is trying to promote open source and the public interest when it comes to AI too. As they state:

Private and public initiatives have existed side by side for a long time. While private innovation often pushes the frontier of what’s possible, public alternatives can make those innovations more accessible and beneficial for everyone. These parallel products and services give people more choices, create market pressure on each other to be more trustworthy and innovative, distribute power across more people and organizations, and create more resilient and healthier economies.

So, where are the public alternatives for AI? They are starting to emerge, with some governments subsidizing access to computational resources, and nonprofit AI labs collectively putting nearly $1 billion into open source AI research and models. These are important steps forward, but they are not enough to create true public alternatives to the results of the hundreds of billions of dollars going into private AI. This status quo means some critical projects — such as using AI to detect illegal mining operations, facilitate deliberative democracy, and match cancer patients to clinical trials — remain under-resourced relative to their potential societal value. In parallel, Big Tech is ramping up efforts to push policymakers to support private AI infrastructure, which could further cement the dominance of just a few companies in creating the future of AI.

(...)

At Mozilla, we’re committed to doing our part by building key parts of the Public AI ecosystem. We will help build public alternatives for the data needed in AI development by doubling down on our Common Voice platform, further expanding access to multilingual voice data to train AI models that represent the diversity of languages around the world. We will invest in open source AI via Mozilla.ai, Mozilla Ventures and Mozilla Builders, which supports the development of tools like llamafile that are making it easier to run AI models locally rather than needing to use commercial cloud providers. And we will continue to support the broader AI accountability ecosystem that is vital for Public AI, continuing to steer our fellowships and data programs toward enabling more people to steer and co-create AI.

While I still think there is a serious danger of applying the shiny new tool of AI to absolutely everything around us under the assumption that it's some sort of magic wand that will solve all our problems, there is no doubt in my mind that AI has potential and will prove very useful in certain areas. So, it's nice to see Mozilla promoting the public interest in this area. {link to this entry}

Do AI coding assistants truly help?
[Mon Sep 30 14:26:35 CDT 2024]

Slashdot shared today that, apparently, it's not clear whether AI coding assistants truly help. In short, a report from the firm Uplevel "found no significant improvements for developers" using Microsoft Copilot. On the contrary, developers using the tool introduced 41% more bugs. Their activity also shifted more toward reviewing code, but that didn't appear to significantly increase their productivity. My approach to this topic, as to anything else new, is that it will most likely help in some areas, but not in others. It won't be the panacea that so many people think it is. It will also introduce a bunch of issues and problems that we didn't foresee. In other words, it will be neither good nor bad, but a mixture of both, like everything else in life. One way or another, we'd better do our best to counterbalance the current hype surrounding everything related to AI. Otherwise, we risk a very serious burst of the bubble sooner rather than later. {link to this entry}

The dangers of a privatized Internet
[Fri Sep 27 07:32:50 CDT 2024]

El Salto Diario published an interview with Marta G. Franco, the author of Las redes son nuestras, a book that reminds us that, not so long ago, the Internet was something other than a playpen where large companies appropriate as much personal information as they can, sell our most private data, and try to manipulate us.

The pages of Las redes son nuestras highlight the role that social networks, Twitter in particular, played in collective experiences such as the 15M movement, and how that communicative power was later absorbed by companies that turned off the tap. Marta G. Franco also speaks of three “thefts” in the history of the Internet. The first took place when the publicly funded infrastructure that made the network possible ended up in the hands of private companies. In this regard, the mention of the Spanish case is very pertinent: there, the public company Telefónica was charged with building RedIRIS, funded by the Ministry of Education and Science. In the mid-1990s, a multitude of private companies offering Internet connection services began to appear. Telefónica was privatized in 1997 by the government of José María Aznar, completing an operation that Felipe González had set in motion.

The second theft was companies' monetization of, and business-building on top of, the altruistic content created by users of the so-called Web 2.0 at the beginning of the 21st century.

And the third, according to the author, came about when social networks, by then synonymous with the Internet, were taken over by armies of bots, and the algorithms set the machinery in motion to privilege certain messages and content that generate engagement and conflict, from very conservative positions. “The tools that were once useful to us are now alien to us,” summarizes Marta G. Franco, adding that “the Web 2.0 platforms were never ours, but for many years they served us, more or less, to converse, learn, meet people, maintain friendships, spread disruptive ideas, and do politics.”

That's pretty much what happened, indeed. The Internet was created by the public sector. I know it's difficult to believe after hearing decades of libertarian propaganda. To me, what's interesting is that the most creative part of it (i.e., the idea of connecting disparate research centers in a common decentralized network, the TCP/IP protocol, a way to share documents from one's computer, email, real-time chat, bulletin boards and online forums, the World Wide Web...) was actually invented by the supposedly backwards and unimaginative public sector. Where the private sector shone was in figuring out how to make money from it: by spreading it to every single corner of the planet, making it the center of our lives, and controlling it.

Mind you, the author's depiction of the Internet in the early 2000s sounds a bit idyllic to me:

Far from the commercial, proprietary Internet, Las redes son nuestras also celebrates the activity of hackers who, at the beginning of the 21st century, operated with logics opposed to those of business. The organization of hacklabs, the shared development of free software, and other acts of digital militancy are chronicled in the book in a look back at the past that is complemented by a look at the present and the future, mentioning examples that today pick up part of those lessons to keep alive the flame of a network of networks that is useful for improving the world we inhabit, on and off the screen. “The fight for the Internet is not more important than the struggles over climate, labor, decolonization, antiracism, antifascism, mental health, housing, or epistemic justice... but the Internet is a tool without which we will find it very hard to advance in any of them,” says Marta G. Franco.

I'd say that by then, the early 2000s, "Big Tech" was already on the rise and clearly appropriating the space. Yes, there were still remnants of resistance, not only from the most militant groups (e.g., the hacklabs), but also from regular people who tried to use the new technology for their own purposes in their own daily lives. That's precisely when the "platforms" showed up as a way to privatize and commercialize that space (or, as most people would say these days, to "monetize" it). For, under capitalism, no sphere of our lives can escape commercialization.

Franco's idea of becoming an "inhabitant" of the Internet, rather than a mere "user", sounds interesting:

I speak of being an “inhabitant” to claim a way of being on the Internet different from that of a “user”. Being an inhabitant of a place means having a certain agency and awareness of the space you live in, living alongside your neighbors, and aspiring to democratic mechanisms for managing the commons. Almost all of us have given up being inhabitants of the Internet, because we tend to be passive users of a handful of private platforms, without questioning how they are handed to us or trying to influence their governance. My proposal is to recover the notion, and the desire, that there must be public and communal space on the Internet.

The goal is not impossible, but it definitely goes against the current. The way I see it, the Internet and new technologies are one more arena where we can fight the big corporate interests in an effort to decolonize our lives. But it's just that: one more arena. There are plenty of other areas of our daily lives that have been all but taken over by these commercial interests. Areas where we become mere "users", passive consumers of products and content. {link to this entry}

OpenAI, business demands and sustainability
[Fri Sep 27 07:13:49 CDT 2024]

I'm not sure how many times we are going to go through this cycle, but here we are again. I suppose I'm old enough to be familiar with the story. Not so long ago, plenty of people were excited about AI and, in particular, about the fact that a company called OpenAI, a spinoff of a non-profit organization, was leading the charge. After all, we could trust a non-profit far more than "Big Tech", right? Well, it was just announced yesterday that OpenAI is to remove non-profit control and give Sam Altman equity. Oh, surprise. Nobody could see this one coming, right? It's not as if we haven't been here before. The problem, the thing we don't want to see, is that the issue is not personal ethics or personal character, but the rules of the game: this is not a matter of individual decisions, but of systemic forces. In any case, yesterday we also read that OpenAI asked the US Government to approve energy-guzzling 5GW data centers:

OpenAI hopes to convince the White House to approve a sprawling plan that would place 5-gigawatt AI data centers in different US cities, Bloomberg reports.

The AI company's CEO, Sam Altman, supposedly pitched the plan after a recent meeting with the Biden administration where stakeholders discussed AI infrastructure needs. Bloomberg reviewed an OpenAI document outlining the plan, reporting that 5 gigawatts "is roughly the equivalent of five nuclear reactors" and warning that each data center will likely require "more energy than is used to power an entire city or about 3 million homes."

According to OpenAI, the US needs these massive data centers to expand AI capabilities domestically, protect national security, and effectively compete with China. If approved, the data centers would generate "thousands of new jobs," OpenAI's document promised, and help cement the US as an AI leader globally.

And there we have it. The path to sustainability. Ever heard of the Jevons paradox, the observation that as technology makes the use of a resource more efficient, total consumption of that resource tends to rise rather than fall? That's another one we forget time and time again when we get too excited about a shiny new technology that is going to save us. Once again, we fail to take the rules of the game into account: the systemic forces I was referring to above. No amount of "good character" can overcome those. {link to this entry}

PagerDuty, Android, and vendor practices
[Thu Sep 26 11:54:00 CDT 2024]

I was asked to help test a new tool for our on-call shifts called PagerDuty. To be clear, it does appear to be a very useful, complete, and easy-to-use tool. However, I ran into a problem when attempting to install their mobile app, which is recommended during the initial account setup. As it turned out, it wouldn't install on my old Motorola Moto G7 running Android 10 because it requires Android 11 or newer. It wouldn't install on my Samsung Galaxy Tab S7 either, because it's running an even older version of Android. Which leads me to what truly is the main issue here: mobile device manufacturers release their products with their own branded version of Android that they quickly stop supporting (and, therefore, quickly stop releasing software updates for), even though the device is still perfectly good and usable. It should be obvious that this practice is, first of all, not environmentally sustainable and, second, not respectful of the freedom of end users. Not that this surprises anyone, of course. Yes, it is theoretically possible to install custom ROMs on these devices. However, only technically skilled people can do that, and even they risk damaging their devices and ending up with a brick in their hands. Not only that, but vendors also do whatever they can to make it difficult to install and run these custom images. The end result? Without a doubt, when it comes to mobile devices we are all less free than with traditional computers, where at least we can install an open source operating system and run open source applications. {link to this entry}

Nginx & FastCGI complaining about read-only SQLite database
[Tue Sep 24 12:26:53 CDT 2024]

After migrating my personal website from a server running Apache to another one running Nginx, a small web app I had written in PHP was unable to write to a SQLite database file. Here is a snippet of the error:

2024/09/24 12:12:41 [error] 1433147#1433147: *15249643 FastCGI sent in stderr: "PHP message: PHP Fatal error:  \
Uncaught PDOException: SQLSTATE[HY000]: General error: 8 attempt to write a readonly database in /path/file.php:10
Stack trace:
#0 /var/www/sacredchaos.com/main/apps/viewinglist/delete-movie.php(10): PDO->exec()
#1 {main}
  thrown in ...

Obviously, this was an issue with file permissions. However, what wasn't so obvious is that simply changing the ownership of the SQLite database file itself to user www-data (the user account Nginx runs under) is not enough: SQLite also needs write access to the directory containing the database, because it creates its temporary journal and lock files there. So I also had to change the ownership of the parent directory to www-data.
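
For reference, the fix boiled down to something like this; a minimal sketch, assuming a hypothetical layout (the actual paths on my server differ):

# Hand the database file itself over to the account Nginx runs under...
chown www-data:www-data /var/www/example.com/app/database.sqlite
# ...and the directory containing it, so SQLite can create its journal
# and lock files next to the database.
chown www-data:www-data /var/www/example.com/app

That fixed it. {link to this entry}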

Using same rule for two locations in Nginx
[Tue Sep 24 12:22:36 CDT 2024]

I needed to use the same rule for two different locations on my website, which is served by Nginx. However, it wasn't as straightforward as I had hoped. In the end, though, I found the answer on Stack Overflow:

location ~ ^/(first/location|second/location)/ {
  ...
}
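
To spell it out, the ~ modifier turns the location into a case-sensitive regular expression match, and the alternation inside the parentheses lets a single block cover both URI prefixes. As a minimal sketch of how this sits in a full server block (the server name, the paths, and the directive inside are hypothetical, just for illustration):

server {
  server_name example.com;

  # One regex location handling both /first/location/ and /second/location/
  location ~ ^/(first/location|second/location)/ {
    # whatever rule both paths need to share, e.g.:
    add_header Cache-Control "no-store";
  }
}

One caveat worth remembering: Nginx evaluates regex locations differently from plain prefix locations (regexes are checked in the order they appear and can take precedence over longer prefix matches), so it pays to double-check that a block like this doesn't unintentionally shadow a more specific location.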
{link to this entry}

"Smart glasses", once more
[Tue Sep 24 07:19:45 CDT 2024]

The Verge published an article on how Meta has a major opportunity to win the AI hardware race with their new "smart glasses". According to them, these glasses "exceeded expectations in a year when AI gadgets flopped." And yes, they are referring to the Humane Ai Pin and the Rabbit R1 when they talk about devices that flopped. To be honest, I'm not sure the Ray-Ban Meta glasses will succeed where the others failed. Yes, they look more normal. But what do they give you for their $300 price tag? According to the article:

This is a device that can easily slot into people’s lives now. There’s no future software update to wait for. It’s not a solution looking for a problem to solve. And this, more than anything else, is exactly why the Ray-Bans have a shot at successfully figuring out AI.

That’s because AI is already on it — it’s just a feature, not the whole schtick. You can use it to identify objects you come across or tell you more about a landmark. You can ask Meta AI to write dubious captions for your Instagram post or translate a menu. You can video call a friend, and they’ll be able to see what you see. All of these use cases make sense for the device and how you’d use it.

In practice, these features are a bit wonky and inelegant. Meta AI has yet to write me a good Instagram caption and often it can’t hear me well in loud environments. But unlike the Rabbit R1, it works. Unlike Humane, it doesn’t overheat, and there’s no latency because it uses your phone for processing. Crucially, unlike either of these devices, if the AI shits the bed, it can still do other things very well.

So, basically, it sounds as if they may have gotten the engineering right. But do we need them? Are they useful? The use cases mentioned in the article leave me unmoved. Perhaps because I'm not much of a social media guy? I suppose there is something else I find a bit unnerving about this approach: it's almost as if we now see our lives as a constant television show, as if we needed to broadcast every single moment of our lives. I'm not sure I like that. {link to this entry}

The beginning of the "deep doubt" era
[Thu Sep 19 11:24:42 CDT 2024]

Benj Edwards writes on Ars Technica about how the arrival of AI fakes may bring about the beginning of the "deep doubt" era:

Deep doubt is skepticism of real media that stems from the existence of generative AI. This manifests as broad public skepticism toward the veracity of media artifacts, which in turn leads to a notable consequence: People can now more credibly claim that real events did not happen and suggest that documentary evidence was fabricated using AI tools.

(...)

In the 20th century, one could argue that a certain part of our trust in media produced by others was a result of how expensive and time-consuming it was, and the skill it required, to produce documentary images and films. Even texts required a great deal of time and skill. As the deep doubt phenomenon grows, it will erode this 20th-century media sensibility. But it will also affect our political discourse, legal systems, and even our shared understanding of historical events that rely on that media to function—we rely on others to get information about the world. From photorealistic images to pitch-perfect voice clones, our perception of what we consider "truth" in media will need recalibration.

It seems clear that this is already happening. As technology develops and AI becomes better and better at what it does, the amount of "noise" builds up toward what Edwards refers to as a "cultural singularity": a moment where we can no longer trust most (any?) sources of information. This, in turn, is bound to seriously erode social trust. And, without social trust, how can you have societies in the first place? The issue is pretty serious. Yes, all this has happened before. Media of doubtful origin is nothing new. Political and cultural manipulation has always been around, and the distortion of facts has always been a key practice of power in all fields. However, what is different this time around is the sheer amount of informational noise being spread around. We certainly run the risk of being buried in "informational junk".

Edwards tries to come up with a more positive approach. His starting point is context:

All meaning derives from context. In a sense, crafting our own interrelated web of ideas is how we make sense of reality. Considering any idea standing alone without knowing how it links up conceptually with the existing world is meaningless. Along those lines, attempting to authenticate a potentially falsified media artifact in isolation doesn't make much sense.

(...)

When we are evaluating the veracity of online media, it's important to rely on multiple corroborating sources, particularly those showing the same event from different angles in the case of visual media or reported from multiple credible sources in the case of text. It's also useful to track down original reporting and imagery from verified accounts or official websites rather than trusting potentially modified screenshots circulating on social media. Information from varied eyewitness accounts and reputable news organizations can provide additional perspectives to help you look for logical inconsistencies between sources.

I'm afraid none of that will work, though. The context itself is a construct, a framework built out of all the elements that constitute the reality around us. And, if everything around us is noise, the context itself will quickly become unstable and impossible to chart. We'll be left absolutely disoriented.

Finally, Edwards is not convinced that we'll be able to solve this problem through technology itself:

Although AI detection tools exist, we strongly advise against using them because they are currently not based on scientifically proven concepts and can produce false positives or negatives. Instead, manually looking for telltale signs of logical inconsistencies in text or visual flaws in an image, as identified by reliable experts, can be more effective.

It's likely that in the near future, well-crafted synthesized digital media artifacts will be completely indistinguishable from human-created ones. That means there may be no reliable automated way to determine if a convincingly created media artifact was human or machine-generated solely by looking at one piece of media in isolation (remember the sermon on context above). This is already true of text, which has resulted in many human-authored works being falsely labeled as AI-generated, creating ongoing pain for students in particular.

Again, if "well-crafted synthesized digital media artifacts will be completely indistinguishable from human-created ones", what hope do we have that we, humans, will be able to distinguish them? No amount of "context" will solve that problem. I wonder if, as a reaction, we may end up with Luddite communities sprouting up here and there. It reminds me of the future depicted in certain science fiction movies. {link to this entry}

Releasing Windows as open source?
[Thu Sep 19 09:21:28 CDT 2024]

Thom Holwerda, the maintainer of the OSNews website, argues that releasing Windows as open source is the only viable way forward for Microsoft. Interestingly enough, this is something my older son has been telling me for well over a decade now. He is convinced that, sooner or later, Microsoft will either release the Windows source code as open source or, more likely, switch to Linux and build a GUI framework, a desktop, and tools on top of it, sort of like Apple did when they released Mac OS X. One of Holwerda's most powerful arguments is that the Windows OS only represents about 10% of Microsoft's total revenue ($22 billion out of $211 billion). As he states, Azure alone is almost four times as large, at $80 billion. Worse yet, LinkedIn brought in $15 billion in revenue: not so far from the Windows figure at all! If we also take into account the increasing cost of maintaining the code base and the fact that Windows is nearly irrelevant in the mobile market (one might say nearly so in the server market too), the argument for releasing it as open source is indeed quite powerful. {link to this entry}