Introduction

I was asked a while ago by a junior developer whether using ChatGPT for programming is a good idea, which reminded me of why you should, as a rule, distrust technology. Setting aside ChatGPT, which you shouldn't rely on for programming anyway (despite what the internet wants you to believe), technology always has a bias, sometimes even a malicious one, that I like to remind people of. So let's dedicate this post to that.

For the record, I'm not stating anything new here. The title of this post, On Trusting Trust, is an homage to the paper Ken Thompson wrote in 1984 on roughly the same topic, and if all this post achieves is to make you go read that paper, it will have accomplished its purpose.

Bugs, Trojan Horses and Bias

There's a funny saying in German that goes: "Don't believe any statistic that you haven't forged yourself", and the similarly titled lecture "Don't believe any scan you haven't forged yourself" from a few years back is one of the examples I like to bring up on this topic. The brilliant lecture sadly hasn't been translated to English yet, but the story got so big that you can easily find information on it in English. Long story short: Xerox scanners, the kind used in offices and government agencies no less, changed numbers and characters in scanned documents, so the digital copies no longer matched the originals.

You might think this was obviously just a bug, and that no company would do something like that deliberately. Sure, in this case it seems Xerox just had an overly aggressive compression algorithm. In the same way, Google didn't intend for their Bard AI to make a mistake that cost them billions of dollars, yet it's naive to generalize this to all Google services, starting with search. Let's do a little experiment: go on Google and search for "The case for banning marijuana". If your results are anything like mine, you won't see ANY link arguing for banning marijuana; instead you're presented with a plethora of arguments for legalizing it, the exact opposite of what you searched for.

And the ever-increasing demand for easier information retrieval exacerbates this problem by giving rise to applications, such as ChatGPT, that don't bother giving you links and instead give you direct answers. Google has been trying that for a while, and the results are what you'd expect. Some researchers call this the "dilemma of the direct answer."

Bugs are unintentional mistakes in a program's design. A program that intentionally misbehaves in a malicious way, however, is called something else: a trojan horse. Technology that acts as an interface to information has the power to shape our opinions and our perception of the world. That power should not be understated, and it is definitely already being used for that purpose.

Computer Literacy

In the book 'Program or Be Programmed', Douglas Rushkoff describes the situation like this:

You’re like Miss Daisy, getting driven from place to place. Only the car has no windows and if the driver tells you there’s only one supermarket in the county, you have to believe him. The more you live like that, the more dependent on the driver you become, and the more tempting it is for the driver to exploit his advantage.

And while this is quite a good metaphor, and the book is all in all a recommended read, I can't follow the author when he proposes learning programming, what people call digital literacy, as the solution to this problem. In fact, it's programmers who would feel the most hopeless.

What Ken Thompson demonstrated in 1984 was that you can, in theory, inject malicious code into any C program through the C compiler itself, in a way that's almost impossible to discover. He ended his aforementioned paper with this conclusion:

You can’t trust code that you did not totally create yourself. (Especially code from companies that employ people like me.) No amount of source-level verification or scrutiny will protect you from using untrusted code.

That's without mentioning that scrutinizing source code is a luxury you can't always afford anyway. Any modest project nowadays pulls in countless libraries written by countless individuals and organizations, open-source and commercial. A programmer's sole advantage is acknowledging the hopelessness of the situation.
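To make the trick concrete, here is a toy sketch of the idea, not Thompson's actual attack (which targeted a real C compiler binary): "compilation" is modeled as a simple string transformation, and all the names (trojaned_compile, BACKDOOR, the trigger strings) are hypothetical. The point it illustrates is that once a trojaned compiler binary exists, the malice survives even when the compiler is rebuilt from perfectly clean source.

```python
# Hypothetical, heavily simplified model of the "trusting trust" attack.
# "Compiling" here just means transforming a source string into a "binary" string.

BACKDOOR = 'if user == "ken": grant_access()  # injected'

def clean_compile(source: str) -> str:
    """An honest compiler: its output faithfully mirrors the source."""
    return source

def trojaned_compile(source: str) -> str:
    """A trojaned compiler binary with two triggers."""
    # Trigger 1: recognize the login program and slip in a backdoor.
    if "def login" in source:
        return source + "\n" + BACKDOOR
    # Trigger 2: recognize the compiler itself and re-insert the trojan,
    # so the malice survives recompilation from clean source.
    if "def compile" in source:
        return "TROJANED:" + source
    return source

def run_compiler(binary: str, source: str) -> str:
    """Model executing a compiler binary on some source."""
    if binary.startswith("TROJANED:"):
        return trojaned_compile(source)
    return clean_compile(source)

login_source = "def login(user): check_password(user)"
clean_compiler_source = "def compile(src): return src"

# Rebuild the compiler from perfectly clean source...
# ...using the already-trojaned binary we have no choice but to trust.
trojaned_binary = trojaned_compile(clean_compiler_source)
new_binary = run_compiler(trojaned_binary, clean_compiler_source)

# The new binary still plants the backdoor, though its source is clean.
compiled_login = run_compiler(new_binary, login_source)
print("injected" in compiled_login)  # True
```

Auditing clean_compiler_source reveals nothing, which is exactly Thompson's point: source-level scrutiny cannot save you when the tool doing the building is itself compromised.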

Takeaways

My aim with this post wasn't to convince you to give up your smart devices, or (worse) to give in to them. I believe it's not a case of either/or; it's about finding a middle ground, using any tool with healthy skepticism and mindfulness. Are digital technologies, and AI especially, necessary and useful? Yes. Can you trust them? No.

When I think of the dangers AI poses to society, I don't think about the jobs it could eliminate; I see that as a natural development with plenty of historical counterparts. What worries me more, and what I hope this post helped illustrate, is AI's ability to deceive us in ways that weren't possible before. I'm worried about people relinquishing their thinking, their decision making and their information sources to AI, unaware of its inherent and possibly malicious biases.

I believe that awareness and skepticism of technology goes a long way.