“At the stall of a company called Softbank Robotics, a Frenchman was attempting to convince a four-foot humanoid to hug a three-year-old girl.
“Pepper,” he said. “Please hug the little girl.”
“I’m sorry,” said Pepper, in an appealingly childlike voice lightly inflected with a Japanese accent and genuine regret. “I didn’t understand.”
“Pepper,” said the Frenchman, with elaborate clarity and forbearance. “Can you please give this little girl a hug.”
The little girl in question, who was sullen and silent and clutching the leg of her father, did not look much like she wanted Pepper to give her a hug.
“I’m sorry,” said Pepper again. “I didn’t understand.”
I felt a sudden surge of compassion for this winsome creature, with its huge innocent eyes, its touchscreen chest, its beautiful human failure to understand.
The Frenchman smiled tightly, and bent down to the side of the robot’s head, where its auditory receptors were located.
“Pepper! Please! Hug! The girl!””
To Be A Machine, Mark O’Connell

I went to a lecture on artificial intelligence today. It brought together artists, filmmakers and scientists who work in the area of — or are inspired by — AI. As a result, it was occasionally insightful and occasionally confused. It was advertised as a discussion of the emergence of racism, sexism and other nasty -isms in AI algorithms, but it was light in these areas, and more worried about Trump, Brexit and ‘fake news’, in the way that everyone slightly to the left of centre is these days. Yes, because it was a German art exhibition, there were the mandatory mentions of Marx, Deleuze and Guattari, but the focus was primarily on stoking fears about AI and wondering how to return to the halcyon days when only billionaire news magnates could spread falsehoods.
The standout moment, for me, was a video introduced by AI veteran Luc Steels. Steels has created a set of humanoid robots designed to play word games. The robots survey their environment, and then one of them says a word to describe some object in the area. The other robot attempts to understand, from context, what is being said, and gives feedback. If the second robot is correct, then both robots have succeeded in communication. If it does not understand, it learns for next time. So the first robot might say some nonsense word, ‘wubamami’, and the second robot will shake its head. The first robot then points to the orange box. Now the second robot has learned that ‘wubamami’ means orange box.
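The game as described can be sketched in a few lines of code. The sketch below is purely illustrative: the object list, the word-invention scheme and the class names are my own assumptions, not Steels’s actual implementation.

```python
import random

# Objects both robots can see. Purely illustrative choices.
OBJECTS = ["orange box", "red ball", "blue cup"]

def invent_word():
    # Invent a nonsense word, e.g. something like 'wubamami'.
    return "".join(random.choice("abimuw") for _ in range(8))

class Robot:
    def __init__(self):
        self.lexicon = {}  # word -> object

    def speak(self, obj):
        # Reuse a known word for the object, or invent a new one.
        for word, known in self.lexicon.items():
            if known == obj:
                return word
        word = invent_word()
        self.lexicon[word] = obj
        return word

    def hear(self, word):
        # Guess which object the word refers to, if the word is known.
        return self.lexicon.get(word)

    def learn(self, word, obj):
        # After the speaker points at the object, adopt its word.
        self.lexicon[word] = obj

def play_round(speaker, hearer):
    obj = random.choice(OBJECTS)
    word = speaker.speak(obj)
    if hearer.hear(word) == obj:
        return True            # communication succeeded
    hearer.learn(word, obj)    # speaker "points"; hearer updates
    return False

a, b = Robot(), Robot()
word = a.speak("orange box")   # a invents a nonsense word
print(b.hear(word))            # None: b shakes its head
b.learn(word, "orange box")    # a points at the orange box
print(b.hear(word))            # orange box: b now understands
```

The point of the structure is that a shared vocabulary emerges from nothing but failed rounds and pointing; no word meanings are programmed in advance.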
There was a series of videos like this. In one, a robot would attempt to instruct the other to perform a movement. Again there would be a nonsense word; again the second robot would not comprehend. The first robot would then demonstrate the movement, waving its arm to the right, and the second robot would learn.
But the most affecting moment was when Steels gave a single robot a mirror. There is, Steels explained, a lot of free ‘play’ time given to his two robots, so that they might learn independently. The sight of the robot inspecting itself with a large circular mirror bolted to its hand, glaring at its reflection with incomprehension, was truly poignant. Steels said the point of the exercise was to give the robot some concept of a ‘self’.
I led with the quote from Mark O’Connell’s superlative To Be A Machine because it was the first thing that came to mind as I watched the robot grapple with identity. In Mark’s book he goes to visit a DARPA robot competition, which mostly involved expensive hardware toppling over or failing to do basic tasks. Mark explained that this was funny, not because they were robots, but because they were human enough to fail.
“I found the robots’ pratfalls comical, in other words, not simply because in their forms and their failures they resembled humans, but because they reflected the strange sense in which humans were themselves mere machines.”
The mirror test on animals, the one that is supposed to enlighten us as to the extent of animals’ cognition, doesn’t reveal much more than recognition. While an animal might recognise itself, we don’t know what it does with that information. A dog’s idea of self is probably not as bundled with histories, narratives and concepts as a human’s is.
Steels’s experiment, he said, is an attempt to unpick meaning. By having the AI create its own language through simple games, he hopes to understand how an AI constructs meaning — while simultaneously creating an AI that has a concept of meaning. The algorithm that serves you your recommended Facebook articles has no concept of meaning, only a concept of virality. And this, Steels argues, is how we get fake news. Teach an AI meaning, and we start to arrive at a point where we can get the AI to consider truth claims.
But the AI is itself a mirror, of a type. Steels’s AI plays games, and its playful nonsense creates a language of a certain kind. But the machine learning AIs that drive Facebook and Google are not playing; they are the hard edge of capitalist innovation. And they mirror that reality. If Steels, instead of having two platonic robot pals describe items to one another, had made one robot need items to survive and the other obliged to obey its orders, the language would be entirely different. This master/slave relationship would indelibly texture the language, and its meanings.
This brings us back to the robot inspecting itself in its mirror. Lacan argues that the moment of self-recognition is the moment of self-alienation. The simultaneous revulsion and attraction we experience at noticing ourselves, a subject distinct from the rest of reality, is the traumatic birth of the self. As the robot looked at itself and attempted to create a language to convey its notion of personhood, it ran up against the very Lacanian barrier of meaning. Words simply cannot convey the full subjectivity of experience. And so, whether AI can adjudicate on truth claims or not, meaning will always be a slippery question, as language is so very often insufficient. Any robot, no matter how well it can tell other robots that a box is orange, is going to run into the very same issues and subjectivities that any human would.
So as the robot stood, inspecting itself, butting up against the insufficiency of language as it attempted to articulate what it saw, it — unbeknownst to itself — was tangled within the futility and pointlessness of its own existence. And is there anything more human than that.
