Can — or should — we humanise artificial intelligence (AI)? Why is this such a vital question for all of us?
I am frequently invited to discuss and debate the potential to humanise AI. Will it be possible to imbue AI with human qualities, sufficient to dupe us or equal us? Worse yet, will it supersede us? Judging by the reactions in the audiences, this question strikes a chord. Most of the time, I sense anxiety around how threatening AI is. I believe that, at the core, there is an existential angst. To feed my reflections and understanding of this grey zone, beyond a long reading list, I’ve attended numerous talks, sessions and exhibitions in an effort to see and experience first-hand the interaction with state-of-the-art AI (e.g. Empathic Futures, AI: More than Human at the Barbican, and Latent Spaces at Illusionaries in Canary Wharf). I’m far from an academic on the topic of humanity, so I’m relying on the diversity of others’ perspectives and my own portfolio of experiences to delve into this topic. Let’s rumble!
So, what makes us human?
To get to the bottom of this question of whether we can or should humanise AI, we need to answer what it actually means to be human. How will we be able to evaluate, much less measure, ‘humanity’ in a machine? As I’ve found in my work around empathy, there are unsolved challenges in evaluating and measuring such soft, fuzzy skills. For that matter, does one really need to measure human qualities such as empathy, faith, love and intuition? As I wrote in Heartificial Empathy, we love to measure stuff. It’s almost a sine qua non in business. But what of measuring humanity? In an article penned in 2008, the journalist and Managing Editor of The Observer, Simon Caulkin, summed up academic VF Ridgway’s 1956 paper: “What gets measured gets managed — even when it’s pointless to measure and manage it, and even if it harms the purpose of the organisation to do so. . . . Just because you can measure it, doesn’t mean you should.” {source: The Observer} But, whether or not one can measure our humanity, the underlying question remains absolutely relevant: what makes us human?
Human as machine
Here are two angles by which to approach the question of what defines us as human beings: (1) what makes us human versus an animal? And (2) what makes us human compared to a machine? I’m not going to tackle the first question, as that’s beyond my competency; rather, I wish to address the latter. To do so, one needs to define what a machine is. It’s worth noting that there’s a whole philosophical debate around the idea that the human being is also a machine. The definition of a machine is frequently articulated in the following manner (as provided, among others, by Wikipedia): “A machine is a physical system using power to apply forces and control movement to perform an action.” Moreover, there is such a thing as a biological molecular machine, or nanomachine. As described in Wikipedia, “Molecular machines are a class of molecules typically described as an assembly of a discrete number of molecular components intended to produce mechanical movements in response to specific stimuli, mimicking macromolecular devices such as switches and motors.” There is a whole field of study trying to create artificial molecular machines. The line between machine and man is therefore not so clear.
Countervailing beliefs and systems
It follows — and it’s not a big stretch to say — that we human beings are physical specimens that use [naturally derived] power to complete a task. Moreover, it’s not without a bit of irony that a machine in the 17th century could also be used to describe a [human-devised] scheme or plot. In today’s vernacular, we use the term “machination” to describe a scheme. To add to the confusion, we have the transhumanist movement that is looking to enhance and prolong our lives via technology in the form of the augmented human being. Think of human beings with all sorts of bionic implants and devices. I harken back to the series with Steve Austin in The Six Million Dollar Man, which brought that idea to life in my imagination. The blur between human and machine has been the subject of a raft of books; I’d cite in particular Mary Shelley’s Frankenstein and Philip K. Dick’s Do Androids Dream of Electric Sheep? (the book that inspired the film Blade Runner). In transhumanism, there’s a strong desire to defy death and cure aging. It’s estimated that the anti-aging/death-defying industry will approach $600 billion by 2028, according to the 2023 Precision Report “Longevity and Anti-senescence Therapy Market” research. In any event, money keeps pouring into that sector to the tune of around $7B in both 2021 and 2022, albeit at a slower pace in 2023 [Source]. Meanwhile, on the other end of the spectrum, we have animism, a belief that objects, places and plants have a spiritual essence. Animism basically gives animus to the inanimate. What of machines, then? With all these differing beliefs, concepts and systems, it’s no wonder the debate rages on as to the humanisation of robots and AI.
But, for now, let’s focus on what we typically consider to be human. There is an almost sacred list of qualities or traits that are frequently designated as ‘uniquely’ human. According to Thomas Kaye, whose article was published on the World Bank site, these qualities include:
“Creativity, empathy, knowledge sharing, leadership, listening, resilience, teamwork, and the ability to cope with ambiguity and uncertainty.”
Kaye cautioned that this was not a comprehensive list, and I agree. At a minimum, I would add the following qualities (though even with these, it will not constitute a complete list):
Intuition, awareness of mortality, greed, suicide, cruelty, gluttony, untethered altruism (aka random acts of kindness), foolish courage, and neuroticism.
The point here is not to try to make a definitive list of uniquely human qualities as, in any event, each term needs some qualification. Rather, it is to highlight the existential nature of the threat that artificial intelligence poses. What if we were able to encode each of these traits? I suggest these four important questions to ponder:
- Is humanity something that can be evaluated from the outside, or must it be felt from within to exist? In other words, can we exist outside of our being? For example, if an emotion is not embodied, is it truly an emotion? (This leads us into the hard problem of consciousness.)
- To what extent is it more important to consider the potential for consciousness or emotions in the machine versus considering our own feelings and existential questions in front of a machine/robot? It’s clear today that many of us have already established strong, so-called synthetic relationships with and through technology. From the connected car to your smartphone, AI is already embedded in our lives. Should we not be looking at our own utilisation of tech and the state of society before condoning or condemning AI?
- Does free will exist? If it does, how does one prove it? And if it doesn’t (as Sam Harris among others has argued), this suggests that everything is essentially pre-coded.
- When we talk about ‘humanisation’, at what point between 1% and 100% do we consider it achieved? In other words, must or should AI seek to be totally (i.e. 100%) human, or is it about having more human qualities? In that case, we must accept that in certain capacities and situations, AI is already surpassing us in our own humanity.
When you digest these questions, are you left feeling more or less positive about our unique human qualities? What about the prospects for AI? Considering how some people are acting (e.g. terrorist atrocities, war, divisive politics and plain incivility in society), it is easy to imagine an AI that acts “better” than us. For example, in the rather prosaic case of stressed and rules-based customer service agents, a better trained and well-programmed AI can be rendered more empathic and provide better service than some agents. At the very least, it’s already a fact that AI can help augment the empathy and effectiveness of customer service agents (e.g. DigitalGenius at KLM). But can AI perform — even outdo us — when it comes to our human qualities? We’ve also seen how AI can be applied to provide better diagnoses than experienced doctors for certain pathologies. As Lee Bottomley reported for the BBC in December 2023, a University of Warwick study showed that AI was just as — or more — accurate than doctors in 35 out of 37 conditions (scanning X-rays) [Source BBC]. So, there are clearly many skills that will be replaceable with AI, even high-level and sophisticated activities and analyses. But what of the so-called unique human qualities?
If I were to isolate the most regularly discussed distinguishing feature of our human-ness, it would be creativity. So, let’s explore that quality further to see if/how AI could be creative. Let’s start with a definition.
What is creativity?
According to Britannica, creativity is “the ability to make or otherwise bring into existence something new, whether a new solution to a problem, a new method or device, or a new artistic object or form.” In this definition, there is no ownership of creativity by a human being. I have long believed that within the creative spirit lies a certain mercurial sense of chaos. We often equate creativity to originality. But every creative act, bar none, has come on the back of some other act. Even Albert Einstein, probably the most emblematically creative scientist, didn’t invent his theories out of thin air. He himself said in 1922 that he stood on the shoulders of [James Clerk] Maxwell, not Newton {source}. Other than perhaps the immaculate conception (trying to be funny), I can’t come up with an act of creativity that wasn’t built on a precedent. As with so many quotes out there, the phrase “Creativity is intelligence having fun” is not only misattributed to Einstein, it’s an inaccurate citation. The original quote, whose attribution to George Scialabba can be backed up, is: “Perhaps imagination is only intelligence having fun.” {Written in 1984 March-April, Harvard Magazine, Volume 86, Number 4, The Browser: Mindplay by George Scialabba, (Book Review of Howard Gardner’s “Frames of Mind: The Theory of Multiple Intelligences” — Source}. But who’s to judge or experience the fun? Is it necessarily the imaginer, or the one benefiting from the fruits of that imagination? This takes me to a rather powerful example of an AI that is capable of a rather remarkable — dare I say creative — feat. I recently asked ChatGPT to be creative via a specific prompt.
MY PROMPT: Can you write a 10-line soliloquy, using Shakespearean style and the voice of Ernest Hemingway, talking about the state of politics today, the lost values of the Greatest Generation of World War II, and how space may be the final frontier?
ChatGPT took no longer than 15 seconds to come up with the following soliloquy:
In the tempestuous sea of politics today,
Where truth is drowned in partisan sway,
I long for the values of a bygone age,
When honor reigned on war's bloodied stage.
The Greatest Generation, they stood so tall,
Defending freedom against tyranny's thrall,
Their sacrifices etched in history's tome,
A legacy that should never be entombed.
But now, the world seems lost in moral haze,
Where greed and power fuel divisive ways,
The compass of integrity astray,
As truth and reason fade away.
Oh, for the clarity of starlight's gleam,
To guide us through this political mayhem,
Perhaps in space, where stars ignite the night,
We'll find the values that set our course aright.
Other than the fact that ChatGPT wrote 16 lines of text versus the requested 10, what do you think of this output? Please share in the comments below how creative you believe this soliloquy to be, on a scale of 0 (not at all) to 10 (maximum).
Next I wanted to see if an image, generated (or created) by AI, with a similar prompt could speak a thousand words. I first tried to use Bing’s AI image generator. I note that their interface provided three options, including a “more creative” conversation style. I was naively optimistic about the potential.
My prompt was only slightly modified:
In the style of Shakespeare and the voice of Ernest Hemingway, create an image that refers to the state of politics today, the lost values of the Greatest Generation of World War II, and how space may be the final frontier.
To my dismay, Bing saw in my prompt a breach of copyright and warned that I’d be banned from the service. I pleaded innocence, with no response back so far. But there you go, my naiveté was soundly dashed. So, I decided to use DeepAI instead, with exactly the same prompt. Again, the output very much depended on the inputs. Not only does the prompt need to be effectively worded, DeepAI provides a suite of gallery styles, shape options and different visual effects from which to choose. And of course, there’s also the tagged data set that it’s using… i.e. the shoulders of the giant datasets on which this creative output depends. Here is what DeepAI produced. Again, using the scale above, how would you rate DeepAI on its creativity with this image?
Now, going back to the initial question posed at the outset of this article, can we humanise artificial intelligence? What do you think?
My answer is a nuanced yes. It is absolutely possible to render a machine more empathic or creative, more able to share knowledge, more resilient, even greedier or more neurotic than some of us human beings, at least at times and in certain contexts, provided it’s programmed with enough well-tagged learning data sets. To the second part of the question, whether we ought to humanise AI, this is a far bigger question. To this, I answer in the same way that I answer for the potential of artificial intelligence in general. That is to say, we are bound to get a range of good and bad and everything in between, much like we do in our world today among us human beings. Should we humanise AI? Again, it’s a nuanced yes. It really depends on the why and by whom. I use a five-point grid to help evaluate any artificial intelligence project or initiative.
How to evaluate AI
- Intention — What is the organisation’s overall intention with the AI? Is the intention explicit or opaque? Is the intention honorable and/or noble, transactional or undefined or, worse yet, malevolent?
- Business Model — How are they financing the endeavour and to what end? Who owns or decides? What is their desired Return on Investment and how important is the drive for profitability?
- Compliance — To what extent is the artificial intelligence compliant with the relevant laws? How transparent are the explanations and how complete are the permissions? Are the learning data sets appropriately set up?
- Ethics — What is the ethical framework of the organisation? How accountable is the leadership to this framework?
- Trust — At the base, holding the key to everything, how much trust is there? How trustworthy is the corporation or organisation that owns and operates it? What about the government / governing regime? And the independence and power of the oversight bodies (e.g. FTC or Competition and Markets Authority)?
By reviewing the organisation that owns and/or runs the AI through the lens of these five criteria, you can better gauge the AI project/initiative.
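For those who like to make such a checklist concrete, the grid can be turned into a simple scoring exercise. Here is a minimal, purely illustrative sketch in Python; the equal weighting, the 0–10 scale and the example scores are my own assumptions, not a prescribed methodology:

```python
# Illustrative only: score an AI initiative against the five criteria
# from the grid above. Weights and example scores are hypothetical.
CRITERIA = ["intention", "business_model", "compliance", "ethics", "trust"]

def evaluate(scores: dict) -> float:
    """Average the 0-10 scores across the five criteria, equally weighted."""
    missing = [c for c in CRITERIA if c not in scores]
    if missing:
        raise ValueError(f"Missing criteria: {missing}")
    return sum(scores[c] for c in CRITERIA) / len(CRITERIA)

# Hypothetical example: a middling-to-decent initiative
example = {
    "intention": 7,       # explicit and honourable?
    "business_model": 6,  # who owns, who decides, who profits?
    "compliance": 8,      # lawful, transparent, permissioned data?
    "ethics": 5,          # accountable leadership?
    "trust": 6,           # trustworthy operator and oversight?
}
print(evaluate(example))  # 6.4
```

Naturally, a single number hides the nuance; the point of the grid is the questions behind each criterion, not the arithmetic.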
So, what do you believe? Can we humanise AI? Should we, even if we could? I’d love to read your comments!
First published here on MinterDial.com.