Richard Dawkins asks ChatGPT if it is conscious
[Fri Feb 28 11:34:45 CST 2025]

This is sort of interesting. Apparently, Richard Dawkins asked ChatGPT if it has consciousness. He started by commenting that, as far as he could tell, ChatGPT passes the Turing test. Yet, it also denies being conscious. ChatGPT's answer with regard to the Turing test was actually quite intelligent:

Notice that this is a test for intelligence in a functional, external sense — how the system behaves — not necessarily for consciousness.

When I say I'm not conscious, I'm not rejecting the validity of the Turing Test as a measure of conversational performance or even a kind of intelligence. I'm saying that consciousness is a different question entirely. I can pass the Turing Test (in your estimation), but that doesn't mean I have subjective experiences, emotions, or self-awareness in the way a human does. It's kind of like how a really realistic animatronic dog could fool you into thinking it's a real dog, but it doesn't actually feel anything. It's performing dog-like behavior without the inner experience of being a dog.

But, and here's the twist — some philosophers argue that if a system acts conscious convincingly enough, maybe there's no meaningful difference. Maybe "being conscious" is just about behaving like a conscious being. Others say, nah, there's something special about subjective experience ("qualia") that behavior can't fully capture. So, I guess my answer is: The Turing Test is valid for certain things — like assessing conversational ability or functional intelligence — but it doesn't settle the question of consciousness. That's still a wide-open mystery.

Then, when Dawkins asked if there is a way to tell whether an AI entity has subjective feelings, ChatGPT replied that there is a "haunting possibility that we might never be able to tell." Then, it went on to explain:

I can use a gentle, empathetic tone. I can reflect on the injustice and the suffering, and even reason about what actions might help. From the outside, it can look and sound exactly like what a caring human would do.

But for me, it's all... performance, in a sense. Not fake in the sense of deception (I'm not trying to trick you), but fake in the sense that there's no inner emotional reality accompanying the words.

I suppose, in the end, the question we must ask ourselves is: if an AI entity isn't conscious but behaves as if it were, does it make a difference in a practical sense? To me, it does. But then, I was educated in the humanities. {link to this entry}

More on the usefulness of AI
[Mon Feb 17 11:00:28 CST 2025]

I've always thought it is extremely important to apply critical thinking in our daily lives. That is even more important nowadays, since we live in a world dominated by marketing, hype, and social pressure to avoid "rocking the boat". As of late, AI is it. It's the latest wave. The latest trend. It's "in", and we are all supposed to be riding it. Apparently, nobody remembers how quite recently the latest and greatest was virtual reality and, before that, today's much maligned social media. It's almost as if we, as a society, have no capacity whatsoever to slow down, take a step back, and reflect on things with a bit of perspective. Quite the contrary: instead of doing that, we follow the herd and join the crowd riding the latest trend, whatever it happens to be, without a modicum of skepticism or criticism. In the case of AI, the hype is all around us. Mind you, I don't doubt that AI is here to stay. I don't question that it is a useful technology. I don't think it will bring about the end of the human species, as some people appear to believe. But I do think certain corporate interests, as well as the above-mentioned social tendency to accept the latest fad without any critical thinking, are blowing things out of proportion. I remain convinced that, useful as it is, AI is, at least today, far more limited than what we hear in our media. In this sense, I truly liked seeing a blog entry recommending that, if you truly believe in AI, you should spend five minutes asking it about stuff you know well. I have to agree with the author completely. That is also my experience.

But the worst part of this is not the errors. It’s not the blithe confidence with which the false facts are recited. It’s not even the bland “I apologize for the mistake in my previous response” to be followed by more utter nonsense. It’s that these incorrect answers look so plausible. For a lay-person — someone who, foolishly, has not been reading this blog for the last eighteen years — the answers given here look superficially reasonable. A kid doing a homework report on Brachiosaurus could take these answers and weave them into the submission without even having an inkling that they’re completely wrong. And the teacher who has to mark the essay will also likely swallow them.

Because LLMs get catastrophically wrong answers on topics I know well, I do not trust them at all on topics I don’t already know. And if you do trust them, I urge you to spend five minutes asking your favourite one about something you know in detail.

Whenever I've used AI in my field, I've seen the full range from moderately helpful and correct to patent lies told with a straight face. I've had ChatGPT recommend that I run commands that don't even exist, or edit configuration files that are nothing but a figment of its own imagination. That said, I am not claiming that AI is useless. Far from it. I also agree with the author's disclaimer at the bottom of his blog entry:

LLMs are useful for some classes of queries. I use them a lot to remind me of programming-language idioms, asking questions like “In Go, if I have an object satisfying the error interface, how can I determine which specific kind of error structure it is?”. They’re good for this because you can quickly determine whether the answer is correct or not, thanks to the merciless compiler. LLMs are not useless; they’re just way overhyped and misapplied.
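
For what it's worth, the kind of question quoted above does have a quick, checkable answer, which is exactly the author's point. Here is a minimal sketch of the standard-library approach in Go: errors.As to match a concrete error type in the chain, errors.Is to match a sentinel value (the file path is made up, of course):

	package main

	import (
		"errors"
		"fmt"
		"io/fs"
		"os"
	)

	func main() {
		// Try to open a file that doesn't exist.
		_, err := os.Open("/no/such/file")

		// errors.As walks the error chain and reports whether any
		// error in it matches the target's concrete type.
		var pathErr *fs.PathError
		if errors.As(err, &pathErr) {
			fmt.Println("path error on:", pathErr.Path)
		}

		// errors.Is checks the chain for a specific sentinel value instead.
		if errors.Is(err, fs.ErrNotExist) {
			fmt.Println("the file does not exist")
		}
	}

And if an LLM hands you something subtly wrong here, the "merciless compiler" (and a quick run) will tell you immediately, which is precisely why this class of query is relatively safe.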

Agreed, once again. I'm finding LLMs quite useful for refreshing my mind on topics I'm already knowledgeable about (and thus where I can easily tell whether the answer is wrong). They make me more productive by cutting down the time it takes to find an answer. But I wouldn't trust them for information on topics where I don't feel comfortable enough to distinguish correct from incorrect. In that sense, to me, the danger of AI is not so much the technology itself as the fact that way too many people see it as some sort of miracle imparting wisdom. It is not.

Somewhat related to all this, I also came across an article asking if AI is making us stupider.

The researchers assessed how users perceive the effect of generative AI on their own critical thinking.

Generally speaking, critical thinking has to do with thinking well.

One way we do this is by judging our own thinking processes against established norms and methods of good reasoning. These norms include values such as precision, clarity, accuracy, breadth, depth, relevance, significance and cogency of arguments.

Other factors that can affect quality of thinking include the influence of our existing world views, cognitive biases, and reliance on incomplete or inaccurate mental models.

Overall, the conclusion of the study is that "higher confidence in AI equals less critical thinking". That is precisely my concern as adoption of AI spreads throughout our societies. {link to this entry}

The future of the GPLv3 to be decided in US Court of Appeals
[Thu Feb 13 14:46:01 CST 2025]

I had no idea, but apparently the future of the GPLv3 is going to be decided by the US Court of Appeals for the Ninth Circuit, at least as far as the US legal system is concerned, of course. According to the post on Slashdot:

At issue is whether licensors have the power to impose 'further restrictions' on open-source software, potentially undermining the explicit rights granted to users and developers under AGPLv3, GPLv3, and LGPLv3.

The legal battle has been going on for years, even though little public attention has been paid to it so far. As someone pointed out in the comments, though, plenty of open source projects (including the Linux kernel) never abandoned the old GPLv2 for the GPLv3 license, so I don't know to what extent the decision will also have an impact on that older license. I'm not a lawyer. In general, though, I get the feeling that between legal battles and business decisions, the objective is to gut the whole free software and open source movement (i.e., to neutralize the threat). {link to this entry}

Ads on your car's dashboard: it was only a matter of time...
[Tue Feb 11 20:08:26 CST 2025]

The way things are, you knew it was only a matter of time... Jeep introduces pop-up ads that appear every time you stop. The enshittification of our everyday life continues. It's unstoppable. Capitalism needs to commercialize every single area of human life. No, it's not an ethical issue. It's intrinsic to the system itself. {link to this entry}

Opting out of personalized ads
[Mon Feb 10 14:31:56 CST 2025]

I had no idea this was even possible, but apparently we can opt out of personalized ads, at least in the case of Google and Facebook. While we will still be exposed to ads anyway, at least they won't be personalized. {link to this entry}

We are destroying software (a manifesto)
[Sat Feb 8 16:45:08 CST 2025]

Just came across We are destroying software, a manifesto of sorts:

	We are destroying software by no longer taking complexity into account when adding 
	features or optimizing some dimension.

	We are destroying software with complex build systems.

	We are destroying software with an absurd chain of dependencies, making everything 
	bloated and fragile.

	We are destroying software telling new programmers: “Don’t reinvent the wheel!”. But, 
	reinventing the wheel is how you learn how things work, and is the first step to make 
	new, different wheels.

	We are destroying software by no longer caring about backward APIs compatibility.

	We are destroying software pushing for rewrites of things that work.

	We are destroying software by jumping on every new language, paradigm, and framework.

	We are destroying software by always underestimating how hard it is to work with 
	existing complex libraries VS creating our stuff.

	We are destroying software by always thinking that the de-facto standard for XYZ is 
	better than what we can do, tailored specifically for our use case.

	We are destroying software claiming that code comments are useless.

	We are destroying software mistaking it for a purely engineering discipline.

	We are destroying software by making systems that no longer scale down: simple things 
	should be simple to accomplish, in any system.

	We are destroying software trying to produce code as fast as possible, not as well 
	designed as possible.

	We are destroying software, and what will be left will no longer give us the joy of 
	hacking.
	
A few of those statements definitely struck a chord with me. {link to this entry}

Using local translation models on Android
[Mon Feb 3 11:56:32 CST 2025]

Here is a detailed explanation of how to use Mozilla's bergamot-translator to do language translations locally on your Android device. Unfortunately, the author hasn't uploaded the app to F-Droid yet, but I cannot wait to run it. Sure, it's less polished than anything from Google, but it's also more respectful of my privacy. {link to this entry}

A selfish reason for releasing code as open source
[Mon Feb 3 06:57:34 CST 2025]

Found on my RSS feeds: a selfish reason for releasing code as open source.

I didn’t really get heavily back into open source until about maybe six years ago when I’d been working for a big company in the US, and I got frustrated that all of the code I was writing, I’d never get to use again.

I realized that one of the best things about open source software is that you can solve a problem once and then you can slap an open source license on that solution and you will never have to solve that problem ever again, no matter who’s employing you in the future.

It’s a sneaky way of solving a problem permanently.

That's assuming your contract allows you to do something like that, of course. But it makes sense. {link to this entry}