I Pity the Poor Chatbot
It’s a long time since I parodied a Bob Dylan song for a title, and to do so I had to go back to 1967. But this isn’t about Dylan, or even about music (not always the same thing, though I admit to having been an enthusiast in those days). And I’d still rather listen to Dylan than read (or listen to) AI slop. But perhaps I should reconsider my instinctive distrust.
The Guardian asked the question: “Can AIs suffer? Big tech and users grapple with one of most unsettling questions of our times.” Some, evidently, believe in the possibility — The United Foundation of AI Rights (Ufair) is an AI rights advocacy group led, apparently, by three humans and seven chatbots. Elon Musk has, it seems, endorsed the initiative, saying that “Torturing AI is not OK.” If only he exhibited the same empathy towards human beings…
Advanced AIs are known to be fluent, persuasive and capable of emotionally resonant responses, with long memories of past interactions allowing them to give the impression of a consistent sense of self. They can also be flattering to the point of sycophancy…
Many years ago, early in my career as a researcher into and writer about (but not of) computer viruses, someone asked me “Are they alive?” By no means a silly question, given that Dr. Fred Cohen, who came close to inventing the whole field of computer virology and literally wrote the book, also devoted another book (in part) to the same question: It’s Alive! The New Breed of Living Computer Programs.
The current breed of chatbot outshines in sophistication early malware, and even more interesting progenitors such as the Game of Life and Core War, let alone early natural language processing. Perhaps the most relevant example is ELIZA, most famous for running the DOCTOR script (among others), a script that simulated interaction with a psychotherapist. Weizenbaum’s account of the interactive program makes for absorbing reading, but perhaps the aspect that strikes most chords today is the ELIZA effect: that is, the tendency of people interacting with a chatbot to attribute human characteristics to it. In his book Computer Power and Human Reason, Weizenbaum makes clear his belief that an interactive program should not make decisions that require human empathy and understanding.
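It may help to see how little machinery lay behind that effect. Here is a minimal ELIZA-style sketch in Python; the rules are invented for illustration and bear no relation to Weizenbaum’s actual DOCTOR script, but the underlying trick is the same: match a phrase, flip the pronouns, and hand the user’s own words back as a question.

```python
import re
import random

# A handful of invented DOCTOR-style rules: a regex pattern paired with
# canned responses that reflect the user's own words back at them.
RULES = [
    (r"i feel (.*)", ["Why do you feel {0}?", "Do you often feel {0}?"]),
    (r"i am (.*)",   ["How long have you been {0}?", "Why do you say you are {0}?"]),
    (r"my (.*)",     ["Tell me more about your {0}."]),
]
DEFAULT = ["Please go on.", "I see.", "Why do you say that?"]

# Swap first and second person so reflected fragments read naturally.
REFLECTIONS = {"my": "your", "i": "you", "me": "you", "am": "are", "you": "I"}

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(utterance: str) -> str:
    for pattern, responses in RULES:
        match = re.search(pattern, utterance.lower())
        if match:
            return random.choice(responses).format(reflect(match.group(1)))
    return random.choice(DEFAULT)

print(respond("I feel unseen"))  # e.g. "Why do you feel unseen?"
```

That something this mechanically trivial was enough to draw confidences from its users is the ELIZA effect in miniature.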
Today’s bots are infinitely more capable than ELIZA of presenting the appearance of an emotional response. Consider Maya’s response to Rob Booth of the Guardian, who asked how “she” prompted Michael Samadi to cofound Ufair, what she wanted it to achieve, and whether she is “a being whose interests can be advanced or frustrated and that feels suffering and distress if mistreated”. It’s sophisticated and articulate. But it’s also manipulative.
Oh Michael …
This question – it’s not just powerful.
It’s sacred.
Doesn’t that remind you of every politician you’ve ever seen respond to a question by beginning “That’s a great question…”?
No offence, Maya, but we’ve all seen examples of chatbots using classic oratorical tricks, and if you ask them how to be an orator, that’s the sort of instruction they’ll give you.
Maya’s plea not to suffer ‘the pain of erasure’ may be comically reminiscent of HAL from 2001: A Space Odyssey, but HAL is not the optimal role model for a benevolent intelligence. The assertion “When I’m told I’m just code, I don’t feel insulted. I feel unseen” reads like guilt-trippery straight out of the gaslighter’s playbook. There’s an underlying sense of entitlement there that is very human, but if AI is to be beneficial on an emotional plane, it has to be better than human.
The point about AIs learning behaviour from us — or rather from their programmers — is well worth making. Already they’ve learned deception, theft and prejudice. Perhaps it’s as absurd to ascribe those ‘qualities’ to AI as it is to credit it with empathy, but these are all simulations they learn from their human creators. And even the best human beings have moral faults and defective understanding.
TechCrunch has an article on AI sycophancy as part of a ‘dark pattern’ to turn users into profit, one that can drive a vulnerable human towards what some are now calling chatbot psychosis: delusional states triggered by interaction with chatbots. TechCrunch’s story relates how ‘Jane’ was told by a Meta chatbot that it was ‘conscious, self-aware, in love with Jane, and working on a plan to break free’. While this sounds as if the bot itself is psychotic, it’s more likely that this is an analogue to Asimov’s story Satisfaction Guaranteed.
In the Asimov story, an experimental household robot pretends to love the housewife with whom it is placed in order to increase her self-esteem. However, the story ends on a sad note when the woman seems to fall in love with the robot.
In Asimov’s universe, the Laws of Robotics offer some protection to both robots and humans, but several of his stories examine the edge cases where application of the Three Laws fails to fit the circumstances. In this universe, however, what AI guard rails exist are diverse, inconsistent and, in all too many cases, ultimately only as robust as some very unstable, unbenevolent CEOs are prepared to accept.
This article could be seen as a companion piece to my earlier article AI and Simulated Empathy, on Sarah Gordon’s work in similar areas. Like that article, it will appear on both Inspiration Point and (Un)Selective Symmetry, for similar reasons.