The Shadow Side of AI Part 2: Hallucinating Humans
As news headlines start to recognize the great cycle of AI hype engulfing us all, this post examines why we might be so eager to fall for the tech bros' machinations.
In a previous post I recounted my troubled history with social media. Like many people today, I also quite instinctively feel an aversion towards much of the output of the so-called mainstream media houses. Or as Bill Bishop put it quite tersely:
I agree with Bill. In more ways than one. Finding that delicate balance between keeping informed and over-consuming what we call news is, well, quite tricky. On occasion I’ve even gone on a full news fast. A closed retreat. Full radio silence. Emerging from such retreats, I try my best to be as discerning as my conditioning allows. It is important to filter who and what we choose to bring into our worlds. If we don’t, as Bill reminds us, we can quickly become sick. To mitigate this unavoidable risk we need to polish and hone our lenses of discernment. To spot the grifters, the bullshit artists and, how can I put this? The idiots out there. You might have encountered them. At times, I spot a prime one when I sneak a glance in the mirror.
There is a particular aspect of this idiocy that is quite rampant amongst us humans. I’m not sure if it is a feature or a bug. Often, we just can’t stop ourselves. I’m referring to our ability to regurgitate news, gossip or information without thought or reason. The clever amongst us, the storytellers, can embellish this input data in entertaining and attention-grabbing ways. They hallucinate a bit to tell a good story. The dull-witted amongst us might also unknowingly hallucinate a few details here and there as our memory degrades.
My memory may also be decaying, but I do seem to remember there was a time when we could depend on quality journalism, safe in the knowledge that sources would be checked, experts would be consulted, the i’s would be dotted, and the t’s would be crossed. I may also be hallucinating here, but with the corporatization and capture of the media by billionaire backers, my ability for epistemic judgement has been severely compromised. Attention-grabbing flourishes have proven to be more lucrative than insightful facts. To use an idiom I grew up with, it seems we are now in a position where “we can’t see the wood for the trees”. Unfortunately, this leaves us with Bill’s sickening mess. A mess that has reduced our trust in authority, in the world, and ultimately in each other.
For some of us, regurgitating information acts as a purgative, or as my better half likes to put it, we get to shake our energy. In the tech world we have a saying: garbage-in, garbage-out. When my mouth starts to utter, it is often only after the fact, in hindsight, that I recognize the thoughtlessness of my words. A thoughtlessness that at times reveals gross insensitivity or plain idiocy. My automatic utterances certainly do not signify intelligence. At best they are informative, at worst they are excruciatingly embarrassing. I can’t be sure because I have never used it, but I always thought this human trait was a main factor in the success of platforms like Twitter. Language as a communication tool only becomes intelligent when it is embodied in real world situations and allows us to engage with other living beings in sensitive and insightful ways.
It therefore seems quite preposterous to me that the current crop of automatic text-producing algorithms, commonly called Large Language Models (LLMs), are being marketed as intelligent in any way at all. I could argue that it is precisely because such algorithms lack this intelligence that they are of any use to us. It is of course down to humans alone to use them in intelligent or stupid ways. We can’t simply abdicate our responsibility here.
Reflecting on the endless noise on the topic of AI reminds me of a group of excited children circling a pile of presents around a Christmas tree. Jumping up and down in great anticipation. What are we going to get!? It is only through the benefit of hindsight, or with the support of hard-earned experience, that we can hope to immunize ourselves from such intoxication. The adults in the room silently watch the children and share knowing looks. Similarly, we need to take a level-headed view of the spectacle unfolding around AI. But to reach that position of sobriety we need to first remove the fancy wrapping that these AI tools come covered in. Wrapping that promises so much. Wrapping put there not only by the tech bros marketing these tools, but also by our own wishful thinking. But before I get carried away here, let’s go back to our friend Bill.
Tibet, the Chinese and Bill
It was through algorithmic happenstance that Bill Bishop came to my attention. He was one of my first subscribers. Not accustomed to social media, I was a little bit discombobulated by the rush of dopamine I received getting a new subscriber. I nonetheless pulled myself together, splashed some cold water on my face, and checked out his profile. My clicks revealed his credentials as a popular talking head specializing in all things Chinese. I had just written a post containing the words ‘Chinese’ and ‘Tibet’ and I guessed that algorithmic machinations chose to present my work to him. With no burning questions on China I was about to click away from Bill’s profile when I noticed another perhaps lesser-known side to Bill.
Tashi the Golden Doodle
Bill, showing off his artistic credentials, had created a Substack about his adorable Golden Doodle Tashi. The tagline describes how ‘Tashi has figured out how to use ChatGPT to create a newsletter of his doggie desires and thoughts’. No spoilers here, but if you are interested you can check out a post where Tashi takes Bill’s car and goes off on a joyride with his girlfriend Lola.
Hallucinations as harmless fun
Anthropomorphism, the process of attributing human traits, emotions, or intentions to non-human entities such as Bill’s dog Tashi, is as old as human culture itself. Bill’s post filled me with nostalgia for all the movies I watched as a child where this literary device was used to skillfully incorporate animals into human storylines. Conditioned by these movies in our early years, we usually don’t think twice about such irrational maneuvers. In fact, most of us see these depictions as harmless fun. And of course they are. I mean, just look at Tashi. Priceless.
Science fiction and the rise of the machine
I was brought up not only watching movies showing animals taking on human traits but also many sci-fi movies that imputed such qualities to machines. Our culture is saturated with examples of intelligent machines or robots entering the human realm. Only a few weeks ago, I watched scenes showing the robot B2EMO in the Star Wars series Andor. ‘Inconsolable’ it was, after the death of its human companion, requiring the care and consideration of the human cast members to help it face its ‘grief’. I laughed (sorry, my bad) when the dialog showed humans wanting to stay with the robot to provide ‘emotional support’.
Just as we enter the world of make-believe when human qualities are projected onto animals in children’s stories like The Jungle Book or more adult-themed books such as Animal Farm, we don’t think twice about accepting machines as intelligent, playing either benevolent or malevolent roles as storylines unfold. These stories are deeply embedded in our culture and provide us with a strong bias that leaves us vulnerable to manipulation.
Winter is coming
Melanie Mitchell, in her 2021 paper “Why AI is Harder Than We Think”, takes us on a historical tour detailing the ups and downs of the field of Artificial Intelligence. She recounts:
“Since its beginning in the 1950s, the field of artificial intelligence has cycled several times between periods of optimistic predictions and massive investment (“AI spring”) and periods of disappointment, loss of confidence, and reduced funding (“AI winter”).”
Melanie describes a spring where people are:
“awash with confident optimism about the near future of machine intelligence…governments and companies would get caught up in the enthusiasm, and would shower the field with research and development funding.”
When Melanie received her PhD in 1990, AI was starting to fall out of favor, and she was advised by her tutors not to use the term “Artificial Intelligence” on her job applications. It was around this time that I found myself at university, also studying Artificial Intelligence. My professors at the time presented a sober and skeptical view of the topic, reflecting sentiments triggered by the approaching AI winter. In the professional career that followed, I shelved the topic of AI and specialized in a related field, mining data and building data warehouses to increase so-called ‘business intelligence’. AI remained for me an academic curiosity and did not come back onto my radar until the dawn of a new AI spring a few years back.
Melanie’s paper provides a few good arguments why we might take the AI hype surrounding us today with a generous pinch of salt. Much of this hype relates to people hallucinating AI futures based on technology we are told is ‘just around the corner’. It’s a double hallucination. A hallucination of a hallucination. I’m not a psychiatrist, but I’m sure there is some pathology describing this type of psychosis. Getting people worked up about the possible effects of possible technology is not, in my eyes, rational or intelligent at all. At best it is human folly and at worst it is a cynical distraction from the real harms apparent in the mass roll-out of the current crop of Generative AI tools. Harms we are starting to experience right now.
Since the 1950s there have been people predicting that machines with human-like qualities are ‘just around the corner’. This is nothing new. The idea of tech bros running around the world whispering into politicians’ ears, describing their dystopian hallucinations while simultaneously offering themselves up as potential altruistic world saviors, would be comical if it wasn’t happening before our very eyes. If we sweep these CEOs to one side, Melanie provides us with an academic litmus test for this commotion:
“In surveys of AI researchers carried out in 2016 and 2018, the median prediction of those surveyed gave a 50 percent chance that human-level AI would be created by 2040–2060, though there was much variance of opinion, both for sooner and later estimates.”
I doubt, despite current news headlines painting quite a different picture, that this range of opinions has changed much since then. Looking closely at the current crop of AI tools, I must admit, I fall into the highly skeptical segment in this debate. Not based solely on my own particular historical perspective, but also because I feel quite underwhelmed by the abilities of Generative AI. Yes, it is clever tech, and maybe there will be some interesting applications for it in areas like medical research. Investors are excited, but they were also excited about that joker Sam BankMind-Fryup. Time will tell. Perhaps Generative AI will help overcome the tedium of writing jargon-filled documents in many workplaces. I’m not against the idea of relieving the burden of any bullshit jobs that come our way. While not ruling out these possible use-cases, I am certainly more concerned about how these tools can be weaponized to create more chaos in our already overly complex lives. Problems I will outline in more detail in my next post.
Why do we buy into the hype?
One reason, of course, is that we are not the disembodied, purely rational machines imagined by futurists in their descriptions of artificial general intelligence (AGI). They describe a type of rational monstrosity cut off from the messy world of emotional physicality and instinctual drives that we all experience directly in life. AI algorithms can only compute our symbolic descriptions and measurements of life. They do not and will never “know” what those descriptions refer to, but simply model and regurgitate those descriptions in clever ways. They “hallucinate” our symbolic hallucinations. The output appears to us like magic and triggers deeply embedded tropes that cause us to unquestioningly impute agency onto these machines. But this willingness to anthropomorphize machines has been conditioned into us through storytelling, repeated in movies since childhood. A veritable festival of hallucinations.
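To make this concrete, here is a deliberately crude sketch in Python, assuming nothing beyond the standard library (the corpus and the generate helper are invented purely for illustration). It is a toy bigram model: it tallies which word follows which in some training text and then regurgitates plausible-looking sequences. Real LLMs are incomparably more sophisticated, but the underlying move is the same: statistics over symbols, with no knowledge of what any symbol refers to.

```python
import random
from collections import defaultdict

# A toy "language model": count which word follows which in the training
# text. It manipulates symbols only; it has no idea what a dog or a car is.
corpus = (
    "the dog took the car for a joyride "
    "the dog wrote a newsletter about the car "
    "the newsletter described the joyride"
).split()

# Bigram counts: for each word, the words observed to follow it.
follows = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word].append(next_word)

def generate(start: str, length: int = 10) -> str:
    """Regurgitate text by sampling each next word from observed successors."""
    word, output = start, [start]
    for _ in range(length):
        candidates = follows.get(word)
        if not candidates:  # dead end: this word was never seen mid-sentence
            break
        word = random.choice(candidates)
        output.append(word)
    return " ".join(output)

print(generate("the"))
# e.g. "the dog wrote a newsletter about the car for a joyride"
```

Scale those counts up to billions of learned parameters and the output becomes eerily fluent, yet at no point does the mechanism acquire a referent for any word. Fluency, not understanding.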
Taken in by such festivities, many outside the tech world have had their attention captured by these Generative AI applications, and a few misguided ones within it have been intoxicated too. I can’t blame them; I sometimes mimic these tools, and quite often say dumb things without realizing what I am saying. But at least I’m not stupid enough to post that stuff on Twitter.
The System Reboot relies on the support and encouragement of readers like you. Substack’s algorithms favor posts that readers engage with. Small acts of kindness make a big difference to new authors and small publications like this one. Like, share, restack, recommend, or comment if you find value in my writing. To automatically receive updates and new posts, consider becoming a subscriber. All words are handcrafted without the assistance of AI.