Who, A.I.?

Artificial Intelligence has been in the news recently for a number of reasons, and few of them good. In March, Microsoft’s Tay, an AI chatterbot that existed briefly on Twitter as an experiment in the effects of human interaction, was a gory, traumatic failure. Tay began her short life excitedly “greeting all humans”, but it wasn’t long before she had supported Hitler, told Mexicans they would pay for a wall between their country and the USA and suggested that all feminists should burn in hell. It got worse. That Tay went full Nazi in 16 hours was partly because Twitter users had worked out a way to make her repeat their own statements verbatim, but it was still a major setback for AI research around the world. Since that ill-fated experiment, Google’s pet AI, made to read 2,865 romance novels to expand its dry, limp diction, restored some hope by emerging from that read-a-thon as a sort of post-modern poet. The poems were deeply strange and far from good, but they were, well, poems, written by a robot. Another recent study by the esteemed Georgia Institute of Technology has introduced the ‘Quixote’ method, which its researchers say can teach a robot how to be nice to humans by telling it stories with strong moral lessons. So far so interesting. The question, surely, is this: which stories do we choose to tell it, and what effect would they have? Let’s speculate.

 


Crime and Punishment, Fyodor Dostoyevsky

Spoiler Alert: Crime and Punishment is about a desperate man who kills an old lady (premeditatedly) and a younger woman (impulsively) with an axe, battles his conscience until it finally overwhelms him, confesses, and is sent to a Siberian prison. The moral of the story is something this robot of ours must heed if it is to have any chance of resembling a human being, or at least responding in kind. Fortunately for this metal infant, the message is actually quite simple. Dostoyevsky’s legendary tale makes an argument for the existence and power of natural law. And if we can teach our bot to reject moral relativism – the idea that there is no absolute truth or falsity of moral judgements – and to write within its code a resistance to any action that contravenes these natural laws, we’re halfway there.

 


Noddy Gets Into Trouble, Enid Blyton

The thinking here would be to create a world in which the worst thing that has ever happened was an old man being teased for having big ears. If our bot was fed the story – the lie, really – that human beings are as innocent as Noddy, perhaps it would be incapable of anything worse than Noddy’s worst indiscretion. When confronted with ‘real world’ information, for example South Africa’s murder statistics, it would blink naïvely at them and continue whistling. This robot would be a bit like California Man, or Edward Scissorhands: unversed in human cruelty or malice and perhaps, if it was limited to Noddy’s range of emotional responses, as safe as a wooden doll with a permanent smile on his face and a stupid bell on the end of his hat. In Noddy Gets Into Trouble, we see our diminutive hero accused of a crime he didn’t commit. He gets out of it, of course, with the help of his friend Big Ears, who probably hears the actual thief confess or teams up with his younger brother, predictably named Little Ears, to solve the case. The trouble with the Noddy approach, much like with many children’s stories sullied by adult review – or perhaps proved dirty by said inspection – is that his apparent innocence can seem far less clear when analysed by a brain that has been operating for longer than four years. One memorable review in a 1978 edition of Woman’s Weekly found Noddy to be “gutless, foolish and sadistic”. Perhaps the moral of Noddy’s story is more sinister than it appears, and if we are training our robot to interpret meaning rather than programming its reactions, then it might become something very horrible indeed. By this logic we would certainly rule out a few other children’s favourites, particularly Captain Pugwash, with his dubious cast of shipmates including ‘Seaman Staines’, ‘Master Bates’ and ‘Roger the Cabin Boy’. Read those names again, slowly, and you’ll understand.

 


Heart of Darkness, Joseph Conrad

The Quixote researchers have said only simple stories will work on this droid early on, and at first glance Joseph Conrad’s masterpiece might not appear to fit the bill. But Conrad’s is essentially a simple story with an even simpler message: human beings, particularly those in isolation, are capable of some truly awful shit. Our aluminium friend needs to know this. It must know the depths to which human beings are capable of sinking. Like Alex in A Clockwork Orange, it must be forced to watch the despicable acts of rotten men and women, and have its SIM card for a heart hard-coded with the relevant disgust. This abject truth must extract a single mercury tear from its small rectangular eye. Only then will it understand the consequences of being bad and, by extension, the point of being good.

 


Independence Day

When telling a non-sentient being a story about human behaviour, you’d be forgiven for asking whether Will Smith is the ideal narrator. But with Independence Day there is much to be learned, not only about the world’s presiding hegemon but, most importantly, about our attitudes to ‘others’. The message is very clear: Earth is the centre of the universe, and humans are the most important creatures on Earth. All other life forms, whether on our planet or elsewhere, are there to be feared, then conquered, and usually then roasted at 180° Celsius for two to three hours. What our bot might learn from this film is that our greatest fear is that others might treat us as badly as we treat anything ‘alien’. What it would do with this information is less clear. Possibly it would feel the need to put us all out of our misery. We might well be building these cretins half hoping that they will unleash on us the sort of alien Armageddon this film fantasises about. Except this time, finally, we would get the ending we might perversely want: to lose.

 

Like Marlow sailing towards the Heart of Darkness, we push the boundaries of AI only to draw nearer to our worst fear – that these robots could become a threat to humanity. There are widespread calls to put a stop to AI research altogether. But Quixote’s developers, who describe their method as a primitive first step towards moral reasoning in AI, will press on. Still, even if we can develop a robot with a fully functioning moral compass, are human morals really the best example to follow? And if we learnt our morals, ethics and acceptable social behaviour from stories we were told, isn’t our supposed intelligence as artificial as the android’s?

 

Matthew Freemantle is a freelance writer on the wrong side of 35.

Illustrations by Emma Philip

 
