Generative AI is the new Kalashnikov
“You tell your lies and you think nobody knows. But there are two people who know. Yes - two people. One is le bon Dieu - and the other is Hercule Poirot.”
Agatha Christie, The Mystery of the Blue Train
M. Poirot is an exacting man. A man in search of the truth. Many of the other characters who populate the books of Agatha Christie are also painfully concerned with the truth - either obscuring it or uncovering it, depending on which side of the pistol, lead pipe, or poison bottle they happen to be on. In Poirot’s world, there are two states of being for any claim - truth and lies - and it is his job to sort one from the other. Philosophy has long occupied itself with a similar job, equally self-appointed. In the Western tradition, discussion of truth traces its origins to Xenophanes, Heraclitus, and other Presocratics.
Much of the debate around generative AI has focused on this same binary - truth and falsehood. Many worry that the capacity of generative AI tools to produce images, audio, video, and text may be used, or is already being used, to produce lies. Fake images, fake video, fake audio. That these powerful AI systems are a means of obscuring the truth, or of outright lying. We need, then, to clarify just what a lie might be, and why there is something more worrying hiding in plain sight.
In On Bullshit (1986), Harry Frankfurt draws a sharp and crucial distinction between lying and what he terms bullshit. The liar is someone very much concerned with the truth: he knows what it is and either wants us to believe something else or wants to hide it in some way. In Poirot’s world there are many liars:
“…the fact about himself that the liar hides is that he is attempting to lead us away from a correct apprehension of reality; we are not to know that he wants us to believe something he supposes to be false.”
For Frankfurt, the liar is looking to lead us away from reality - perhaps because he does not want us to know the truth, or because there is some other claim he wishes us to believe. Penn Jillette, the great magician, often speaks of magicians as liars for exactly this reason. A magician knows the truth of the trick and wants you to believe something else entirely. The bullshitter is different:
“The fact about himself that the bullshitter hides, on the other hand, is that the truth-values of his statements are of no central interest to him; what we are not to understand is that his intention is neither to report the truth nor conceal it.”
Unlike the liar, the bullshitter has no interest in the truth; it is simply not his concern. The only thing the bullshitter is interested in masking is what he is really up to. Bullshit is a smokescreen, not to hide the truth, but to hide the bullshitter.
“What bullshit essentially misrepresents is neither the state of affairs to which it refers nor the beliefs of the speaker concerning that state of affairs. Those are what lies misrepresent, by virtue of being false. Since bullshit need not be false, it differs from lies in its misrepresentational intent. The bullshitter may not deceive us, or even intend to do so, either about the facts or about what he takes the facts to be. What he does necessarily attempt to deceive us about is his enterprise. His only indispensably distinctive characteristic is that in a certain way he misrepresents what he is up to.”
Whilst a great deal of angst surrounding generative AI has focused on deceit - the deliberate and intentional obscuring of truth - we should perhaps be just as concerned, if not more so, about bullshit. The great power of generative AI, at least for now, is not that it produces works of unparalleled quality. The images, text, audio, and video it constructs are well within the range of ability of artists, writers, photographers, and filmmakers, as anyone who has spent five minutes with the tools can tell you. Instead, it is how cheap, efficient, and easy the tools are that is so striking and, potentially, so destabilising. In this way, the real risk of generative AI is not that we will be tricked by its lies, but that we will be blinded by its bullshit. This may be far more dangerous. Frankfurt makes similar claims about lies and bullshit:
“[The bullshitter] does not reject the authority of the truth, as the liar does, and oppose himself to it. He pays no attention to it at all. By virtue of this, bullshit is a greater enemy of the truth than lies are.”
Generative AI tools may well reach a level of sophistication at which incredibly volatile imagery, video, or audio material can be produced. Faked phone recordings of political leaders in dialogue, scandalous images of a celebrity or business tycoon, or videos portraying soldiers committing atrocities when they should be protecting the innocent. All such cases could be seismic in the damage they cause, but all are examples of lies rather than bullshit. In each case, some bad actor wishes us to believe something that is not true as if it were. Such lies require thought, attention, and diligence. As Frankfurt says of lies:
“Telling a lie is an act with a sharp focus. It is designed to insert a particular falsehood at a specific point in a set or system of beliefs, in order to avoid the consequences of having that point occupied by the truth. This requires a degree of craftsmanship, in which the teller of the lie submits to objective constraints imposed by what he takes to be the truth. The liar is inescapably concerned with truth-values. In order to invent a lie at all, he must think he knows what is true. And in order to invent an effective lie, he must design his falsehood under the guidance of that truth.”
AI tools may help, but even with them it is no simple thing to make such lies. It requires precision, craft, and skill. Such lies would be akin to high-grade weapons: hard to produce, requiring knowledge, training, and expertise, but devastating if used correctly. However, such weapons are not accessible to all. Far more worrying is the weapon of the mob.
In 1947, Mikhail Kalashnikov designed the first of the Kalashnikov family of weapons. Better known as the AK-47, it is a weapon that has endured for more than 75 years and has been seen in almost every insurgency, revolution, and armed conflict across the globe ever since. It is cheap - in some markets, just a few dozen dollars. It is also extremely reliable, easy to manufacture, simple to use, and utterly ubiquitous.
“With as few as eight moving parts, depending on the version, an AK-47 can be field-stripped and reassembled by an illiterate 8-year-old Ugandan after less than an hour of training.”
There are estimated to be somewhere between 70 million and 100 million of them on the planet - around 20% of all firearms in existence. With such numbers, and with such a low barrier to entry for armed militias, drug cartels, and guerrilla fighters, it is no wonder that the AK-47 is responsible for orders of magnitude more deaths than atomic bombs. This isn’t to say we shouldn’t worry about nuclear weapons, but that we can often be blinded by the power and sophistication of great technologies such that we don’t see the potential harm caused by those that appear less ostentatious. The AK-47 kills more people not because it’s more powerful but because it’s everywhere. Why is it in the hands of child soldiers, drug runners, and political revolutionaries alike? Not because it’s good. It has limited range and limited accuracy, and is, at best, a mediocre firearm. It’s so popular because it’s good enough.
In this way, generative AI is the new Kalashnikov. It is good enough. It might not generate the kinds of media people are most worried about - it doesn’t have that power - but it has far greater reach because it is in so many hands. Whilst we worry that sophisticated AI tools may come to be used to perpetrate lies and falsehoods, we may miss the low-level generative AI tools endlessly muddying the communal waters we exist in. Through sheer quantity, these tools may cause far greater harm, in time, than any more powerful technology, just as the AK-47 has placed more people in the ground than all of the atomic weapons ever built.
The claim will come, though, just as it (vacuously) does for firearms, that it isn’t the fault of the technology. Guns don’t kill people, of course; people do. I won’t address how specious the argument for firearms is, but it is reasonable to say that a similar argument is made for a whole range of technologies, including generative AI, and there may be a stronger case to be made in that instance than in the one concerning guns. The claim, really, is that technology is neutral. Whether it is a stone axe, a pencil, a nuclear reactor, or an iPhone, we are told that technology has no inherent moral inclination; it simply is. It is we who put the ethical spin on things when we decide upon the ways in which we want to use the technology.
This is bullshit, in the Frankfurt sense. Any such claim is not concerned with truth or falsity; no one really believes such a thing. Anyone making the claim is actually concerned with you not seeing what they’re really up to. And what they’re really up to, of course, is some immoral or unethical use of technology. The person who claims that guns are not the problem, people are, is not advancing a reasoned position against which you can and should argue, but seeking to hide the fact that guns, in their hands, are very much the problem. Neil Postman explains this facet of technology in his works Amusing Ourselves to Death and Technopoly:
“Every technology has an inherent bias. It has within its physical form a predisposition toward being used in certain ways and not others. Only those who know nothing of the history of technology believe that a technology is entirely neutral.”
Postman describes the way in which technology has a natural, or inherent, moral dimension by virtue of the uses to which it is most readily put:
“Each technology has an agenda of its own. It is, as I have suggested, a metaphor waiting to unfold. The printing press, for example, had a clear bias toward being used as a linguistic medium. It is conceivable to use it exclusively for the reproduction of pictures… But in fact there never was much chance that the press would be used solely, or even very much, for the duplication of icons. From its beginning in the fifteenth century, the press was perceived as an extraordinary opportunity for the display and mass distribution of written language. Everything about its technical possibilities led in that direction. One might even say it was invented for that purpose.”
Postman is interested in applying this reasoning to mass media and television, but let’s ask ourselves the same questions about generative AI. What direction are we led in by its technical possibilities? We know that the defining features of generative AI are that it can copy, mimic, and create through recombination and that it can do so with ease, efficiency, and accessibility. What sort of ends does this then lend itself to? Whilst it can be used for creative ends, for the purposes of research, and to produce new works, what it really has baked in is a propensity for bullshit. To produce content without meaning, to obscure the truth through replication and recombination, and to do so at a vast scale.
Will it be used for other things? Absolutely. In Cambodia, artists have made artwork from Kalashnikovs, but that is not really what they are for, deep down. Instead, generative AI’s greatest threat to us is not job loss, not occupying the territory of artists, writers, and filmmakers, and certainly not destabilising political systems and the world of business through lies and fakery - although all of those are threats - but the dissemination of bullshit. It will flood into our shared waters and muddy, poison, and obscure everything we use to keep ourselves anchored and oriented. Postman talks of this in terms of information and its degradation in the light of mass media, but the same can be said, in precisely the same terms, of generative AI:
“Like the Sorcerer’s Apprentice, we are awash in information. And all the sorcerer has left us is a broom. Information has become a form of garbage, not only incapable of answering the most fundamental human questions but barely useful in providing coherent direction to the solution of even mundane problems.”
This ecological metaphor, one of poisoning our waterways, may prove useful in thinking about the problem. Technologies are often spoken of in an additive or subtractive way: a particular technology will add some capability or subtract some problem. AI, for example, can add hours to our day, add skills and capabilities we may not otherwise have, and subtract tedious, repetitive tasks or difficult challenges from our lives. The same discussion was had for the printing press, the steam engine, the combustion engine, the computer, and the internet. And in every case, we can see that this way of framing things is wrong. Technologies - revolutionary technologies at least - don’t simply add some things and take some things away; their impact is far too dispersed, interconnected, and wide-ranging for that. It isn’t like getting a new car or buying yourself a better TV. The impact is far more ecological. Postman explains this using a simple analogy:
“Technological change is neither additive nor subtractive. It is ecological. I mean “ecological” in the same sense as the word is used by environmental scientists. One significant change generates total change. If you remove the caterpillars from a given habitat, you are not left with the same environment minus caterpillars: you have a new environment, and you have reconstituted the conditions of survival; the same is true if you add caterpillars to an environment that has had none. This is how the ecology of media works as well. A new technology does not add or subtract something. It changes everything.”
We worry about bleeding-edge technologies in the way that we might worry about a tiger getting loose at the zoo. If it comes across you whilst you’re walking the dog, you’re probably in for a bad time. More damaging, by orders of magnitude, are the rabbits introduced to Australia by the First Fleet, the kudzu vine brought to the southern United States, or the zebra mussels in the waters of the Great Lakes. These species didn’t hunt their competition to extinction; they simply overwhelmed it. And no one saw the danger until they had spread beyond control and devastated the ecosystem.
In the case of generative AI, it is our knowledge ecosystem that is about to be brutally harmed. It is an ecosystem that has withstood a number of challenging epochs in the past, but it may be in a weaker and poorer condition than ever before. Fake news, post-truth thinking, decades of advertising, marketing, and cheap politics, and the grinding, toxic progress of neo-liberal capitalism leave us vulnerable in the extreme. The discourse will, inevitably, revolve around the outliers - those dazzling, enticing questions of true and false, of lies, deceit, and manipulation - whilst we gradually succumb to suffocation by bullshit, let loose by generative AI. It will cloud and obscure all meaningful discussion, rendering our concepts of truth obsolete, meaningless, and empty. We may well now stand, not at the death of truth, but at the end of truth, with not even God, le bon Dieu, able to discern what is from what is not.
Further Reading
Frankfurt, H. G. (2009). On Bullshit. Princeton, NJ: Princeton University Press.
Postman, N. (1985). Amusing Ourselves to Death: Public Discourse in the Age of Show Business. London: Methuen.
Postman, N. (1992). Technopoly: The Surrender of Culture to Technology. New York, NY: Vintage Books.